Repository: ses4255/Versatile-OCR-Program
Branch: main
Commit: 6d4337845a05
Files: 16
Total size: 228.0 KB
Directory structure:
gitextract_quvo_ddo/
├── .gitignore
├── LICENSE
├── README.md
├── patch_notes/
│ └── v2.0_initial_patchnotes.md
├── planned_features.md
├── setup_guide.md
├── v1.0_initial/
│ ├── Dockerfile
│ ├── advanced_ocr.py
│ ├── custom_doclayout_yolo.py
│ ├── ocr_stage1.py
│ └── ocr_stage2.py
└── v2.0_initial/
├── Dockerfile
├── advanced_ocr.py
├── custom_doclayout_yolo.py
├── ocr_stage1.py
└── ocr_stage2.py
================================================
FILE CONTENTS
================================================
================================================
FILE: .gitignore
================================================
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
.pybuilder/
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# UV
# Similar to Pipfile.lock, it is generally recommended to include uv.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
#uv.lock
# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock
# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/latest/usage/project/#working-with-version-control
.pdm.toml
.pdm-python
.pdm-build/
# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# pytype static type analyzer
.pytype/
# Cython debug symbols
cython_debug/
# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
# Ruff stuff:
.ruff_cache/
# PyPI configuration file
.pypirc
================================================
FILE: LICENSE
================================================
Copyright (C) 2025 Eunsoo Seo
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, you can find it here:
https://www.gnu.org/licenses/agpl-3.0.html
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU Affero General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>.
================================================
FILE: README.md
================================================
# OCR System Optimized for Machine Learning: Figures, Diagrams, Tables, Math & Multilingual Text
---
### 🚀 **COMING SOON: Next-Level AI Pipeline Integration**
**This OCR project is just the beginning.**
In less than **1 month**, a powerful new system will be released:
> **A customizable AI pipeline with memory — tailored to *your* field.**
Whether you're a **student**, **researcher**, or **developer**,
you’ll be able to build your own smart, memory-enhanced AI —
without needing deep AI knowledge.
### UPDATE: Release Slightly Delayed
First of all, thank you so much for your interest in this project.
I had originally planned to release the first version of the AI pipeline before June.
But to be honest, I've been juggling a major academic commitment (a critical exam on June 15) and development at the same time — and it's been tougher than I expected.
Rather than rushing out something incomplete, I’ve decided to take a bit more time to ensure the release is genuinely useful, stable, and worth your time.
This whole system — including the multi-modal OCR — actually started as a tool to help with my own studies.
I didn't expect it to get this much attention, so thanks.
Since I'm the first user, I want to make sure it's something I’d actually want to use before releasing it.
Development will resume after the exam, and the public release will follow once the system is truly ready.
Thanks again for your patience — I really appreciate it.
---
## Overview
This OCR system is specifically designed to extract structured data from complex educational materials—such as exam papers—in a format optimized for machine learning (ML) training.
It supports multilingual text, mathematical formulas, tables, diagrams, and charts, making it ideal for creating high-quality training datasets.
## Key Features
– Optimized for ML Training: Extracted elements such as diagrams, tables, and figures are semantically annotated with contextual explanations.
This includes automatic generation of natural language descriptions for visual content (e.g., “This figure shows the process of mitosis in four stages”) to enhance downstream model training.
– Multilingual Support: Works with Japanese, Korean, and English, and can be easily customized for additional languages.
– Structured Output: Generates AI-ready outputs in JSON or Markdown, including human-readable descriptions of mathematical expressions, table summaries, and figure captions.
– High Accuracy: Achieves 90–95% accuracy on real-world academic datasets such as EJU Biology and UTokyo Math.
– Complex Layout Support: Accurately processes exam-style PDFs with dense scientific content, formula-heavy paragraphs, and rich visual elements.
– Built With: DocLayout-YOLO, Google Vision API, Gemini Pro Vision, MathPix OCR, OpenAI API, OpenCV, and more.
# Sample Outputs
Below are actual examples of outputs generated by this system using real-world materials (2017 EJU Biology & 2014 University of Tokyo Math), including English-translated semantic context and extracted data.
**Math Input**

**Output**

**English-translated outputs**
Question 1. Consider the rectangular prism OABC–DEFG with a square base of side length 1. Points P, Q, R are on the segments AE, BF, and CG, respectively, and four points O, P, Q, and R lie on the same plane. Let S be the area of quadrilateral OPQR. Also, let ∠AOP be α and ∠COR be β. (2) If α + β = 1 and S = S, find the value of tan α + tan β. Also, if α ≤ β, find the value of tan α.
[Image Start]
Image description:
This image shows the rectangular prism OAB–CDEFGQ. Each vertex is labeled with alphabets. The angle α is marked on face OAB. The plane ORPQ intersects the prism and is highlighted. Line RC lies on face ODCG, and line PB lies on face ABFQ.
Educational value:
This image enhances spatial reasoning by visualizing 3D geometry and cross-sections. It helps learners understand concepts such as plane geometry, solid shapes, spatial visualization, and angles.
Related topics:
Solid geometry, cross-sections, prism faces, triangle, spatial reasoning
Exam relevance:
This type of question appears in entrance exams like:
1. Calculate the area of ORPQ using angle α
2. Find the lengths of OR, RP, PQ, QO
3. Determine the angle between ORPQ and the prism's face
4. Locate points P, Q, R in coordinate space
5. Calculate volume/area of the prism parts
6. Predict shapes based on constraints
7. Sketch the shape of the prism
[Image End]
**Biology Input**

**Output**

**English-translated outputs**
Question 39. The photo shows the mitotic cell division process (somatic cell division) of an onion root tip. Cells A–D are in different stages of division. Match the stages (prophase, metaphase, anaphase, telophase) to each cell and select the correct combination from options ①–⑧.
[Image Start]
Image description:
This image shows the process of plant cell division observed under a microscope. Various cells are in different mitotic phases, including chromosomes aligned at the center (metaphase), separating to poles (anaphase), or forming daughter nuclei (telophase).
A – appears to be in anaphase
B – possibly telophase
C – prophase or prometaphase
D – metaphase
Educational value:
This helps students visually understand the process of mitosis, reinforcing knowledge of cell division phases and their characteristics. It connects to biology concepts like DNA replication, cancer biology, and genetics.
Related topics:
Mitosis, Cell cycle, Prophase, Metaphase, Anaphase, Telophase, DNA replication
Exam relevance:
This image is used in questions such as:
1. Match A, B, C, D to appropriate mitotic phases
2. Describe characteristics of each phase
3. Explain the significance of mitosis
4. Discuss how errors in mitosis lead to genetic diseases
[Image End]
[Table Start]
| Prophase | Metaphase | Anaphase |
|----------|-----------|----------|
| A | C | D |
| A | D | B |
| B | C | C |
| B | D | C |
| C | A | D |
| C | D | A |
| D | A | B |
| D | C | A |
Summary:
Each option (①–⑧) corresponds to a specific mapping of A, B, C, D to prophase, metaphase, and anaphase.
Educational value:
Understanding time-based transition in mitosis and data organization in tables. Enhances data interpretation, pattern recognition, and analysis skills.
Related topics:
Data analysis, table interpretation, biological data classification
[Table End]
## Usage Workflow
1. Step 1 – Initial OCR Extraction
Run ocr_stage1.py to extract raw elements (text, tables, figures, etc.) from input PDFs.
This step performs layout detection and stores intermediate results (e.g., coordinates, cropped images, raw content).
2. Step 2 – Semantic Interpretation & Final Output
Run ocr_stage2.py to process the intermediate data and convert it into structured, human-readable output.
This includes generating natural-language explanations, summaries, and organizing content into AI-ready formats (JSON/Markdown).
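In script form, the two stages run back to back. The sketch below is a minimal illustration (it assumes the default paths configured inside the scripts; adjust to your environment):

```python
import subprocess

# Stage 1: layout detection and raw extraction; writes intermediate results
# (coordinates, cropped images, raw content) for the second stage to consume.
subprocess.run(["python", "ocr_stage1.py"], check=True)

# Stage 2: semantic interpretation of the intermediate results into
# structured JSON/Markdown with natural-language explanations.
subprocess.run(["python", "ocr_stage2.py"], check=True)
```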
## Technical Implementation
**Table Processing Optimization**
– Table regions are detected using DocLayout-YOLO
– Google Vision OCR is used for table processing instead of MathPix, for better accuracy with Japanese text
– Table structures are preserved in structured JSON format (maintaining row/column structure)
– Y-coordinate information is maintained to ensure contextual continuity
– Original layout information is preserved alongside structured data for ML training

**Image and Special Region Processing**
– Image regions are processed using the Google Vision API's image analysis features (imageProperties, labelDetection, textDetection)
– Image descriptions are generated using the Google Cloud Vision API
– Graphs/charts are processed using the Google Cloud Vision API's document analysis features, with data point extraction
– Special region processing results are stored in structured JSON format for ML training
– Original coordinate information and region-type metadata are added to maintain contextual continuity
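For a sense of what "structured JSON with layout metadata" means here, the record below is an illustrative sketch only; the field names are hypothetical, and the actual schema is defined in the OCR scripts:

```python
# Hypothetical shape of one stored table region (not the repository's
# actual schema; shown to illustrate the metadata described above).
table_region = {
    "type": "table",
    "page": 2,
    "bbox": [72, 310, 540, 465],   # original layout coordinates, in pixels
    "y_center": 387,               # Y position kept for contextual ordering
    "rows": [                      # row/column structure preserved as-is
        ["前期", "中期", "後期"],
        ["A", "C", "D"],
    ],
    "engine": "google-vision",     # tables use Google Vision rather than MathPix
}
```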
## Purpose and Contact
This OCR system is an open project, and I’d love to see others improve or build upon it. Continuous updates and community-driven enhancements are the goal.
If you’re interested in custom AI tools or would like to collaborate on an AI-related project, feel free to reach out via email:
**Email**: [ses425500000@gmail.com](mailto:ses425500000@gmail.com)
## License
This project is now licensed under the GNU Affero General Public License v3.0 (AGPL-3.0),
in compliance with the original license of the DocLayout-YOLO model used in this repository.
Please note that any derivative or deployed version (including as a web service)
must also publicly share its complete source code.
More details: https://www.gnu.org/licenses/agpl-3.0.html
See the [LICENSE](./LICENSE) file for full terms.
⸻
_Note: The English translations in the examples were manually reformatted for clarity and consistency. Please treat them as reference only, as structure and layout may differ slightly from the original._
_Keywords: OCR, exam OCR, table recognition, diagram OCR, AI education tools, OpenAI, Gemini Pro Vision, multilingual OCR, DocLayout-YOLO, Machine Learning, educational ML dataset, research OCR, paper OCR, document AI_
================================================
FILE: patch_notes/v2.0_initial_patchnotes.md
================================================
# v2.0_initial Update
**Fix Docker permission instability + optimize memory usage in advanced_ocr.py**
⸻
### Summary
This patch brings two major improvements to the **Versatile-OCR-Program**:
1. **Fixes a Docker permission loss issue on Vertex AI / Jupyter environments**
2. **Optimizes memory usage in `advanced_ocr.py` to handle large, image-heavy PDFs more efficiently**
---
### [1] Fix: Docker Permission Instability After Kernel Interruptions
**Problem**
- Docker commands would fail with `Permission denied` after a Jupyter kernel interruption (due to memory spikes or manual stops).
**Root Cause**
- The `jupyter` user was not persistently recognized as a member of the `docker` group unless the machine was rebooted.
- This behavior is specific to Jupyter-based environments (e.g., Vertex AI, Colab Pro VMs) where group permissions are reset per session.
**Failed Attempts**
- Adding `sudo` inside `subprocess.run()` failed due to the absence of a TTY.
- Using `shell=True` caused unpredictable behavior and was ultimately removed.
**Final Fix**
- The `jupyter` user was permanently added to the `docker` group:
```bash
sudo usermod -aG docker jupyter
sudo reboot
```

- All subprocess calls to Docker now use plain `docker run` without `sudo`.

**Impact**

- Prevents permission loss on session or kernel restart.
- Ensures stable and persistent Docker access inside Jupyter Notebooks.
- Simplifies code and avoids reliance on elevated permissions.

⸻
### [2] Feature: Memory Optimization in `advanced_ocr.py`

The `advanced_ocr.py` module was refactored to significantly reduce memory usage without changing core functionality or output format. A minimal sketch of these patterns follows the impact list below.

**Key Optimizations**

1. **Garbage Collection**
   - Added `gc.collect()` after large memory operations.
   - Imported the `gc` module for explicit cleanup.
2. **Image Processing**
   - Resized large images before feeding them into OCR pipelines.
   - Applied JPEG compression with quality 85 to reduce in-memory buffer size.
   - Used downscaled thumbnails for hash operations.
   - Released all image buffers immediately after use.
3. **Memory Management**
   - Explicitly used `del` to release large objects.
   - Used `.copy()` after cropping to avoid memory leaks from image views.
   - Switched to page-by-page PDF parsing instead of loading entire files.
4. **Efficient String Building**
   - Replaced inefficient `+=` concatenations with list-based string assembly using `''.join()`.
   - Split large text blocks into smaller, manageable chunks.
5. **API Handling Improvements**
   - Reduced request payload size for external API calls (e.g., Gemini).
   - Cleaned up response objects immediately after use to free memory.
**Impact**

- Handles high-resolution, multi-page PDFs (100–200+ pages) without exceeding memory limits.
- Prevents kernel crashes on large inputs.
- Keeps behavior and output identical to the original.
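The sketch below illustrates these patterns in combination. It is a minimal illustration, not the actual `advanced_ocr.py` code: `process_pdf` and `run_ocr` are hypothetical names, and the real script interleaves these steps with layout detection.

```python
import gc
import io

from pdf2image import convert_from_path, pdfinfo_from_path


def run_ocr(jpeg_bytes: bytes) -> str:
    """Placeholder for the real OCR calls (Google Vision / MathPix / etc.)."""
    return ""


def process_pdf(pdf_path: str) -> str:
    parts = []  # list-based assembly instead of repeated '+=' on a string
    num_pages = pdfinfo_from_path(pdf_path)["Pages"]
    for page_num in range(1, num_pages + 1):
        # Parse one page at a time instead of loading the whole PDF at once
        page = convert_from_path(pdf_path, first_page=page_num,
                                 last_page=page_num)[0]
        page.thumbnail((2000, 2000))  # downscale large pages before OCR
        buf = io.BytesIO()
        page.save(buf, format="JPEG", quality=85)  # bounded in-memory buffer
        parts.append(run_ocr(buf.getvalue()))
        # Release buffers explicitly, then force a collection pass
        buf.close()
        del page
        gc.collect()
    return "".join(parts)
```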
⸻

**Files Affected**

- `ocr_stage1.py` (Docker execution logic)
- `advanced_ocr.py` (OCR core logic)

⸻

**Recommendation**

Use this update if you're running the Versatile-OCR-Program in a Jupyter-based cloud environment (e.g., Vertex AI, GCP Notebook, Colab Pro). It ensures both system stability and memory efficiency, especially when processing large, image-rich PDF documents.
================================================
FILE: planned_features.md
================================================
# Planned Features
**Image Embedding via OpenAI CLIP**
Image embedding using OpenAI CLIP will be added alongside the current natural language descriptions generated by Gemini Pro Vision.
Since the project integrates multiple APIs, all new components will be thoroughly tested to ensure stability before release.
This update is scheduled for an upcoming version.
**Full local pipeline support (no API key needed)**
Currently, some components (e.g. OpenAI, MathPix) rely on external APIs. The final goal is to replace all of them with local alternatives. Planned replacements include:
• Tesseract or TrOCR for general OCR
• Pix2Struct, Donut, or DocTR for document layout analysis
• CLIP or similar models for image-text semantic alignment
• LLaMA, Gemma, Mistral, Phi, etc. for reasoning and QA
**Prompt injection prevention & hallucination mitigation**
To reduce the risks of prompt injection and hallucination that are common with LLMs, the system will adopt structured improvements (a validation sketch follows this list):
• Input/output validation with JSON Schema or Pydantic
• Isolated inference per module and context separation
• Fact-checking pass to detect and filter hallucinated output
• Structural prompt design separating instruction from data
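As a rough illustration of the first item, here is a minimal Pydantic validation sketch. The field names are hypothetical, not the project's actual schema:

```python
from typing import Optional

from pydantic import BaseModel, ValidationError


class RegionResult(BaseModel):
    region_type: str   # e.g. "table", "figure", "text"
    page: int
    description: str


def parse_model_output(raw: dict) -> Optional[RegionResult]:
    """Reject malformed or unexpected LLM output instead of trusting it."""
    try:
        return RegionResult(**raw)  # raises on missing or mistyped fields
    except ValidationError:
        return None  # treat invalid output as untrusted and drop it
```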
**Offline-friendly deployment**
A fully self-contained version with all models and dependencies bundled will be released, allowing secure use in air-gapped or sensitive environments.
================================================
FILE: setup_guide.md
================================================
# OCR System Setup Guide
This guide provides step-by-step instructions for setting up the EJU OCR system, including environment configuration, NVIDIA setup, API key requirements, and file organization.
By default, the files are stored in the user's home directory (`/home/jupyter`); adjust the paths to match your own environment.
**Important update**
If you are using the v2.0_initial version, run the following commands in your terminal:
```bash
sudo usermod -aG docker jupyter
sudo reboot
```
## 1. Environment File Setup
Create a `.env` file in your project directory with the following content. Replace the placeholder values with your actual API keys and credentials:
```
OPENAI_API_KEY=your_openai_api_key_here
MATHPIX_APP_ID=your_mathpix_app_id_here
MATHPIX_APP_KEY=your_mathpix_app_key_here
GOOGLE_SHEETS_SPREADSHEET_ID=your_google_sheets_id_here
GOOGLE_APPLICATION_CREDENTIALS=/home/jupyter/credentials/Vision_S.Account.json
GEMINI_API_KEY=your_gemini_api_key_here
GCS_BUCKET_NAME=your_gcs_bucket_name_here
```
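To confirm the values are actually picked up, a quick check like the one below can help. This is an optional sketch; it assumes the `python-dotenv` package, which is not listed in the install steps below:

```python
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory
for key in ("OPENAI_API_KEY", "MATHPIX_APP_ID", "GEMINI_API_KEY"):
    print(key, "set" if os.getenv(key) else "MISSING")
```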
## 2. Required Python Packages
Install the required Python packages:
```bash
pip install google-genai
pip install openai
```
## 3. NVIDIA Setup
Follow these steps to set up NVIDIA for GPU acceleration:
### 3.1. Install NVIDIA Container Toolkit
```bash
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
```
### 3.2. Configure Docker to Use NVIDIA Runtime
Check if the Docker daemon configuration file exists:
```bash
cat /etc/docker/daemon.json
```
If the file doesn't exist or doesn't contain NVIDIA runtime configuration, create or edit it:
```bash
sudo nano /etc/docker/daemon.json
```
Add the following content (make sure to maintain proper indentation):
```json
{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
```
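Optionally, you can sanity-check that the file parses before restarting Docker. This is a small sketch; adjust the path if your setup differs:

```python
import json

# Fails loudly on malformed JSON before you restart the Docker daemon.
with open("/etc/docker/daemon.json") as f:
    config = json.load(f)
print("nvidia runtime configured:", "nvidia" in config.get("runtimes", {}))
```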
### 3.3. Verify GPU Recognition
Test if Docker can access the GPU:
```bash
docker run --gpus all nvidia/cuda:11.3.1-cudnn8-devel-ubuntu20.04 nvidia-smi
```
### 3.4. Check CUDA Version
Verify the CUDA version:
```bash
docker run --gpus all --rm nvidia/cuda:11.3.1-cudnn8-devel-ubuntu20.04 nvcc --version
```
If both commands display output without errors, your NVIDIA setup is complete!
## 4. API Key Requirements
You need to obtain API keys from the following services:
1. **OpenAI API Key**: Register at [OpenAI Platform](https://platform.openai.com/) to get your API key.
2. **Gemini API Key**: Get your API key from [Google AI Studio](https://makersuite.google.com/).
3. **MathPix API Key and App ID**: Register at [MathPix](https://mathpix.com/) to get your API key and App ID.
4. **Google Cloud Service Account**: Create a service account with Vision API and Storage permissions in the [Google Cloud Console](https://console.cloud.google.com/).
## 5. File Organization
The following files must be in the same directory (e.g., in a `docker` folder):
- `Dockerfile`
- `advanced_ocr.py`
- `custom_doclayout_yolo.py`
## 6. Google Cloud Storage (GCS) Bucket Setup
1. Create a GCS bucket in the [Google Cloud Console](https://console.cloud.google.com/storage/browser).
2. Make sure your service account has the necessary permissions to access this bucket.
3. Update the `GCS_BUCKET_NAME` environment variable in your `.env` file with your bucket name.
## 7. Credentials Setup
Create a `credentials` directory to store your Google service account JSON files:
```bash
mkdir -p /home/jupyter/credentials
```
Place your service account JSON files in this directory:
- `Vision_S.Account.json` - For Google Vision API
- `Sheets_S.Account.json` - For Google Sheets API
## 8. Running the OCR System
After completing all the setup steps, you can run the OCR system using the Docker container:
```bash
python ocr_stage1.py
```
This will:
1. Build the Docker image if it doesn't exist
2. Mount the input, output, and credentials directories
3. Run the OCR processing on your PDF files
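Under the hood, the Docker handling amounts to something like the sketch below. The image tag and mount paths are illustrative; the actual values are defined in `ocr_stage1.py`:

```python
import subprocess

IMAGE = "versatile-ocr"  # hypothetical image tag

# Build the image only if it is not already present
if subprocess.run(["docker", "image", "inspect", IMAGE],
                  capture_output=True).returncode != 0:
    # "." is the build context containing the Dockerfile (e.g., the docker folder)
    subprocess.run(["docker", "build", "-t", IMAGE, "."], check=True)

# Run with the input/output/credentials mounts declared in the Dockerfile
subprocess.run([
    "docker", "run", "--gpus", "all", "--rm",
    "-v", "/home/jupyter/input:/app/input",
    "-v", "/home/jupyter/output:/app/output",
    "-v", "/home/jupyter/credentials:/app/credentials",
    IMAGE, "python", "advanced_ocr.py",
], check=True)
```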
## Troubleshooting
- If you encounter GPU-related errors, make sure your NVIDIA drivers are properly installed and compatible with the CUDA version.
- If API calls fail, verify that your API keys are correctly set in the `.env` file.
- For Docker-related issues, check that the Docker daemon is running and properly configured for NVIDIA runtime.
## Additional Notes
- The OCR system processes PDF files from the input directory specified in the `ocr_stage1.py` script.
- Results are saved to the output directory and also uploaded to your GCS bucket.
- To customize the output language, modify the prompt templates in the OCR scripts.
================================================
FILE: v1.0_initial/Dockerfile
================================================
###############################################################################
# Dockerfile for GPU-based Python environment with DocLayout-YOLO (HEAD)
# - CUDA 11.8 + cuDNN 8 + Ubuntu 20.04
# - Python 3.9 (via deadsnakes)
# - Timezone: Asia/Seoul (can be changed)
# - NumPy <2.0 (1.26.4)
# - Patched DocLayout-YOLO (latest HEAD) to remove 'init_subclass' keyword argument
###############################################################################
FROM nvidia/cuda:11.8.0-cudnn8-devel-ubuntu20.04
# NVIDIA settings
ENV NVIDIA_VISIBLE_DEVICES=all
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility
ENV DEBIAN_FRONTEND=noninteractive
ENV TZ=Asia/Seoul
# 1) Install Packages
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get install -y \
software-properties-common \
wget \
git \
build-essential \
poppler-utils \
libgl1-mesa-glx \
libglib2.0-0 \
tzdata \
python3.9 \
python3.9-distutils \
python3.9-dev && \
ln -fs /usr/share/zoneinfo/Asia/Seoul /etc/localtime && \
echo "Asia/Seoul" > /etc/timezone && \
dpkg-reconfigure --frontend noninteractive tzdata && \
rm -rf /var/lib/apt/lists/*
# 2) Install pip
RUN wget https://bootstrap.pypa.io/get-pip.py -O /tmp/get-pip.py && \
python3.9 /tmp/get-pip.py && \
rm /tmp/get-pip.py
# 3) Create symbolic links for python3 and pip
RUN ln -sf /usr/bin/python3.9 /usr/local/bin/python && \
ln -sf /usr/local/bin/pip /usr/local/bin/pip3
# 4) Set working directory
WORKDIR /app
# 5) Upgrade pip, setuptools, and wheel
RUN pip install --no-cache-dir --upgrade pip setuptools wheel
# 6) Install PyTorch & TorchVision (e.g., 2.0.1 + cu118)
RUN pip install --no-cache-dir \
torch==2.0.1 \
torchvision==0.15.2 \
--index-url https://download.pytorch.org/whl/cu118
# 7) Install NumPy and other Python dependencies
RUN pip install --no-cache-dir \
numpy==1.26.4 \
Pillow==9.4.0 \
opencv-python==4.7.0.72 \
pdf2image==1.16.3 \
requests==2.31.0 \
huggingface_hub==0.19.4 \
google-cloud-storage==2.9.0 \
google-cloud-vision==3.4.0 \
PyYAML==6.0.1 \
ultralytics==8.0.196 \
protobuf==3.20.3
RUN pip install google-genai
# 8) Clone the latest HEAD version of DocLayout-YOLO
RUN git clone https://github.com/opendatalab/DocLayout-YOLO.git /app/doclayout-yolo
WORKDIR /app/doclayout-yolo
RUN git checkout main
RUN pip install --no-cache-dir -e .
# 9) Patch: Remove 'init_subclass' keyword argument from YOLOv10
RUN sed -i \
's/class YOLOv10(Model, PyTorchModelHubMixin, repo_url=.*$/class YOLOv10(Model, PyTorchModelHubMixin):/' \
/app/doclayout-yolo/doclayout_yolo/models/yolov10/model.py
# 10) Switch back to /app directory
WORKDIR /app
# 11) Copy custom_doclayout_yolo.py and advanced_ocr.py
COPY custom_doclayout_yolo.py /app/custom_doclayout_yolo.py
COPY advanced_ocr.py /app/advanced_ocr.py
# 12) Define mountable volumes
VOLUME ["/app/input", "/app/output", "/app/credentials"]
# 13) Set environment variables
ENV PYTHONUNBUFFERED=1
ENV GOOGLE_APPLICATION_CREDENTIALS=/app/credentials/YOUR_Google_Vision_S.Account.json
ENV PDF_FOLDER=/app/input
ENV OUTPUT_FOLDER=/app/output
ENV GCS_BUCKET_NAME=YOUR_GCS_BUCKET_NAME
ENV MATHPIX_APP_ID="YOUR_MATHPIX_APP_ID"
ENV MATHPIX_APP_KEY="YOUR_MATHPIX_APP_KEY"
ENV PYTHONPATH=/app:/app/doclayout-yolo
# 14) CMD: Run advanced_ocr.py with --input /app/input to process all PDFs in that directory
CMD ["python", "/app/advanced_ocr.py", "--input", "/app/input"]
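# Example manual build & run (illustrative; ocr_stage1.py normally performs these steps):
#   docker build -t my-ocr-image .
#   docker run --gpus all --rm \
#     -v "$PWD/input":/app/input \
#     -v "$PWD/output":/app/output \
#     -v "$PWD/credentials":/app/credentials \
#     my-ocr-image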
================================================
FILE: v1.0_initial/advanced_ocr.py
================================================
import os
import cv2
import numpy as np
import json
import time
import hashlib
import base64
import requests
import io
import tempfile
from datetime import datetime
from google.cloud import storage
from google import genai
from google.genai import types
from PIL import Image
class AdvancedOCR:
def __init__(self, model_path=None, confidence_threshold=0.5, use_cache=True, cache_dir='cache'):
"""
Initialize advanced OCR processing class
Args:
model_path (str): DocLayout-YOLO model path
confidence_threshold (float): Detection confidence threshold
use_cache (bool): Whether to use caching
cache_dir (str): Cache directory path
"""
self.model_path = model_path
self.confidence_threshold = confidence_threshold
self.use_cache = use_cache
self.cache_dir = cache_dir
# Create cache directory
if self.use_cache and not os.path.exists(self.cache_dir):
os.makedirs(self.cache_dir)
# Load DocLayout-YOLO model
try:
from custom_doclayout_yolo import DocLayoutYOLO
self.doc_layout_model = DocLayoutYOLO(model_path=self.model_path)
print("DocLayout-YOLO model loaded successfully")
except Exception as e:
print(f"Failed to load DocLayout-YOLO model: {e}")
self.doc_layout_model = None
# Set up Gemini API
self._setup_gemini_api()
# Initialize Google Cloud Storage client
self._setup_gcs_client()
def _setup_gemini_api(self):
"""Set up Gemini API"""
# Get API key from environment variable
api_key = os.environ.get("GEMINI_API_KEY", "")
if api_key:
# Initialize latest Gemini API client
self.gemini_client = genai.Client(api_key=api_key)
print("Gemini API client initialized successfully")
else:
self.gemini_client = None
print("Warning: GEMINI_API_KEY environment variable not set")
def _setup_gcs_client(self):
"""Initialize Google Cloud Storage client"""
try:
# Get service account info from environment variable
SERVICE_ACCOUNT_JSON = os.environ.get("GOOGLE_APPLICATION_CREDENTIALS")
self.BUCKET_NAME = os.environ.get("GCS_BUCKET_NAME", "YOUR_GCS_BUCKET_NAME")
if SERVICE_ACCOUNT_JSON:
from google.oauth2.service_account import Credentials
creds = Credentials.from_service_account_file(SERVICE_ACCOUNT_JSON)
self.storage_client = storage.Client(credentials=creds, project=creds.project_id)
print("Google Cloud Storage client initialized successfully")
else:
self.storage_client = None
print("Warning: GOOGLE_APPLICATION_CREDENTIALS environment variable not set")
except Exception as e:
self.storage_client = None
print(f"Failed to initialize Google Cloud Storage client: {e}")
def _calculate_image_hash(self, image):
"""
Calculate image hash
Args:
image (numpy.ndarray): Image to calculate hash for
Returns:
str: Image hash string
"""
# Convert image to bytes
_, buffer = cv2.imencode('.png', image)
# Calculate hash
image_hash = hashlib.md5(buffer).hexdigest()
return image_hash
def _get_cached_result(self, image_hash, cache_type):
"""
Get cached result
Args:
image_hash (str): Image hash
cache_type (str): Cache type (e.g., 'ocr', 'layout')
Returns:
dict or None: Cached result or None (cache miss)
"""
if not self.use_cache:
return None
cache_file = os.path.join(self.cache_dir, f"{cache_type}_{image_hash}.json")
if os.path.exists(cache_file):
try:
with open(cache_file, 'r', encoding='utf-8') as f:
return json.load(f)
except Exception as e:
print(f"Error loading cache file: {e}")
return None
def _save_to_cache(self, image_hash, cache_type, result):
"""
Save result to cache
Args:
image_hash (str): Image hash
cache_type (str): Cache type (e.g., 'ocr', 'layout')
result (dict): Result to save
"""
if not self.use_cache:
return
cache_file = os.path.join(self.cache_dir, f"{cache_type}_{image_hash}.json")
try:
with open(cache_file, 'w', encoding='utf-8') as f:
json.dump(result, f, ensure_ascii=False, indent=2)
except Exception as e:
print(f"Error saving to cache: {e}")
def _detect_with_doclayout_yolo(self, image_np):
"""
Detect document layout using DocLayout-YOLO
Args:
image_np (numpy.ndarray): Input image
Returns:
list: List of detected regions
"""
# Calculate image hash
image_hash = self._calculate_image_hash(image_np)
# Check cache
cached_result = self._get_cached_result(image_hash, 'layout')
if cached_result is not None:
return cached_result
# Return empty result if DocLayout-YOLO model is not initialized
if self.doc_layout_model is None:
print("DocLayout-YOLO model not initialized")
return []
# Detect with DocLayout-YOLO
try:
# Save image to temporary file
with tempfile.NamedTemporaryFile(suffix=".jpg", delete=False) as temp_file:
temp_path = temp_file.name
cv2.imwrite(temp_path, image_np)
# Use predict method
results = self.doc_layout_model.predict(temp_path, conf=0.25)
# Filter and format results
regions = []
if results and len(results) > 0:
result = results[0]
if hasattr(result, 'boxes') and result.boxes is not None:
boxes = result.boxes.xyxy.cpu().numpy()
classes = result.boxes.cls.cpu().numpy()
confs = result.boxes.conf.cpu().numpy()
class_names = result.names
for i, (box, cls_id, conf) in enumerate(zip(boxes, classes, confs)):
x1, y1, x2, y2 = map(int, box)
cls_name = class_names[int(cls_id)]
if conf >= self.confidence_threshold:
regions.append({
'type': cls_name,
'coords': [int(x1), int(y1), int(x2-x1), int(y2-y1)],
'confidence': float(conf)
})
# Delete temporary file
os.unlink(temp_path)
# Merge overlapping regions
regions = self._merge_overlapping_regions(regions)
# Save to cache
self._save_to_cache(image_hash, 'layout', regions)
return regions
except Exception as e:
print(f"DocLayout-YOLO detection error: {e}")
return []
def _merge_overlapping_regions(self, regions):
"""
Merge duplicate or overlapping regions
Args:
regions (list): List of regions to merge
Returns:
list: List of merged regions
"""
if len(regions) <= 1:
return regions
# Function to calculate IoU
def calculate_iou(box1, box2):
# Extract box coordinates
x1, y1, w1, h1 = box1['coords']
x2, y2, w2, h2 = box2['coords']
# Calculate box endpoints
x1_end, y1_end = x1 + w1, y1 + h1
x2_end, y2_end = x2 + w2, y2 + h2
# Calculate intersection area
x_inter = max(0, min(x1_end, x2_end) - max(x1, x2))
y_inter = max(0, min(y1_end, y2_end) - max(y1, y2))
area_inter = x_inter * y_inter
# Calculate union area
area1 = w1 * h1
area2 = w2 * h2
area_union = area1 + area2 - area_inter
# Calculate IoU
if area_union == 0:
return 0
return area_inter / area_union
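        # Worked example: boxes [0, 0, 10, 10] and [5, 0, 10, 10] intersect in a
        # 5x10 strip, so IoU = 50 / (100 + 100 - 50) ≈ 0.33, which falls below
        # the 0.5 duplicate threshold applied below.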
# Mark regions to keep
to_keep = [True] * len(regions)
# Check for duplicate regions
for i in range(len(regions)):
if not to_keep[i]:
continue
for j in range(i+1, len(regions)):
if not to_keep[j]:
continue
# Consider as duplicate if same class and IoU above threshold
if regions[i]['type'] == regions[j]['type'] and calculate_iou(regions[i], regions[j]) > 0.5:
# Remove the one with lower confidence
if regions[i]['confidence'] < regions[j]['confidence']:
to_keep[i] = False
break
else:
to_keep[j] = False
# Return only non-duplicate regions
filtered_regions = []
for i in range(len(regions)):
if to_keep[i]:
filtered_regions.append(regions[i])
return filtered_regions
def _detect_regions(self, image_np):
"""
Detect special regions in image
Args:
image_np (numpy.ndarray): Input image
Returns:
list: List of detected regions
"""
# Detect regions with DocLayout-YOLO
regions = self._detect_with_doclayout_yolo(image_np)
# If no regions, treat entire image as text region
if not regions:
height, width = image_np.shape[:2]
regions = [{
'type': 'text',
'coords': [0, 0, width, height],
'confidence': 1.0
}]
# Sort regions by Y coordinate
regions.sort(key=lambda r: r['coords'][1])
return regions
def _crop_region(self, image, region):
"""
Extract region from image
Args:
image (numpy.ndarray): Original image
region (dict): Region information
Returns:
numpy.ndarray: Extracted region image
"""
x, y, w, h = region['coords']
# Adjust coordinates if they exceed image boundaries
x = max(0, x)
y = max(0, y)
w = min(w, image.shape[1] - x)
h = min(h, image.shape[0] - y)
return image[y:y+h, x:x+w]
def _process_text_region(self, region_img, region_info):
"""
Process text region
Args:
region_img (numpy.ndarray): Region image
region_info (dict): Region information
Returns:
dict: Processed region information
"""
# Calculate image hash
image_hash = self._calculate_image_hash(region_img)
# Check cache
cached_result = self._get_cached_result(image_hash, 'text_ocr')
if cached_result is not None:
cached_result['coords'] = region_info['coords']
return cached_result
# Call Google Vision OCR API
try:
# Encode image as base64
_, buffer = cv2.imencode('.png', region_img)
encoded_image = base64.b64encode(buffer).decode('utf-8')
# Prepare API request data
request_data = {
'requests': [
{
'image': {
'content': encoded_image
},
'features': [
{
'type': 'TEXT_DETECTION'
}
],
'imageContext': {
'languageHints': ['ja', 'en', 'ko']
}
}
]
}
# API call (using service account credentials)
from google.cloud import vision
from google.oauth2.service_account import Credentials
SERVICE_ACCOUNT_JSON = os.environ.get("GOOGLE_APPLICATION_CREDENTIALS")
if SERVICE_ACCOUNT_JSON:
creds = Credentials.from_service_account_file(SERVICE_ACCOUNT_JSON)
vision_client = vision.ImageAnnotatorClient(credentials=creds)
image = vision.Image(content=buffer.tobytes())
context = vision.ImageContext(language_hints=['ja', 'en', 'ko'])
response = vision_client.text_detection(image=image, image_context=context)
text = ''
if response.text_annotations:
text = response.text_annotations[0].description
processed_result = {
'type': 'text',
'coords': region_info['coords'],
'text': text
}
# Save to cache
self._save_to_cache(image_hash, 'text_ocr', processed_result)
return processed_result
else:
# API key method (alternative)
response = requests.post(
'https://vision.googleapis.com/v1/images:annotate',
params={'key': os.environ.get('GOOGLE_VISION_API_KEY', '')},
json=request_data
)
# Process response
if response.status_code == 200:
result = response.json()
text = ''
# Extract text
if 'responses' in result and result['responses'] and 'fullTextAnnotation' in result['responses'][0]:
text = result['responses'][0]['fullTextAnnotation']['text']
processed_result = {
'type': 'text',
'coords': region_info['coords'],
'text': text
}
# Save to cache
self._save_to_cache(image_hash, 'text_ocr', processed_result)
return processed_result
else:
print(f"Google Vision API error: {response.status_code} {response.text}")
except Exception as e:
print(f"Text region processing error: {e}")
# Return empty result on error
return {
'type': 'text',
'coords': region_info['coords'],
'text': ''
}
def _process_table_region(self, region_img, region_info):
"""
Process table region (using Gemini API)
Args:
region_img (numpy.ndarray): Region image
region_info (dict): Region information
Returns:
dict: Processed region information
"""
# Calculate image hash
image_hash = self._calculate_image_hash(region_img)
# Check cache
cached_result = self._get_cached_result(image_hash, 'table_ocr')
if cached_result is not None:
cached_result['coords'] = region_info['coords']
return cached_result
# Process table with Gemini API
try:
# Process as text region if Gemini client is not initialized
if self.gemini_client is None:
print("Gemini API client not initialized. Processing as text region.")
return self._process_text_region(region_img, region_info)
# Convert image to PIL format
pil_image = Image.fromarray(cv2.cvtColor(region_img, cv2.COLOR_BGR2RGB))
# Create prompt
prompt = """
Analyze this table and respond in the following format:
1. Accurately reproduce the table structure in markdown format. Clearly distinguish each column and row, and use line breaks appropriately to make the table structure visually clear.
2. Provide a brief summary of the table content.
3. Explain the educational significance and importance of this table.
4. List related learning topics.
Provide your response in the following JSON format:
{
"markdown_table": "| Column1 | Column2 | Column3 |\n|-----|-----|-----|\n| Row1Col1 | Row1Col2 | Row1Col3 |\n| Row2Col1 | Row2Col2 | Row2Col3 |",
"summary": "Table content summary",
"educational_value": "Educational significance and importance",
"related_topics": ["Related topic 1", "Related topic 2", ...]
}
Return only the JSON format without any other text. In particular, include line breaks (\\n) in the markdown_table field using actual markdown table format.
"""
# API call (latest method)
print("Calling Gemini API - processing table region")
# Convert image to bytes
img_byte_arr = io.BytesIO()
pil_image.save(img_byte_arr, format='PNG')
img_bytes = img_byte_arr.getvalue()
contents = [
types.Content(
role="user",
parts=[
types.Part.from_text(text=prompt),
types.Part.from_bytes(data=img_bytes, mime_type="image/png")
],
),
]
generate_content_config = types.GenerateContentConfig(
response_mime_type="text/plain",
)
response = self.gemini_client.models.generate_content(
model="gemini-2.0-flash",
contents=contents,
config=generate_content_config,
)
# Log response
print(f"Gemini API response type: {type(response)}")
# Process response (improved method)
gemini_result = {}
try:
# Get response text
response_text = response.text
print(f"Gemini API response text: {response_text[:100]}...")
# Try to parse JSON
try:
# Extract JSON part using regex
import re
json_match = re.search(r'(\{.*\})', response_text, re.DOTALL)
if json_match:
json_str = json_match.group(1)
gemini_result = json.loads(json_str)
else:
# Construct directly if not in JSON format
gemini_result = {
"markdown_table": "",
"summary": response_text,
"educational_value": "",
"related_topics": []
}
except Exception as json_error:
print(f"JSON parsing error: {json_error}")
gemini_result = {
"markdown_table": "",
"summary": response_text,
"educational_value": "",
"related_topics": []
}
except Exception as resp_error:
print(f"Response processing error: {resp_error}")
gemini_result = {
"markdown_table": "",
"summary": "Error occurred during response processing",
"educational_value": "",
"related_topics": []
}
# Construct result
markdown_table = gemini_result.get("markdown_table", "")
summary = gemini_result.get("summary", "")
educational_value = gemini_result.get("educational_value", "")
related_topics = gemini_result.get("related_topics", [])
# Construct final text
final_text = f"""[Table content start. ChatGPT should not delete this content. This is important conversion content.]
## Table Structure:
{markdown_table}
## Summary:
{summary}
## Educational Significance:
{educational_value}
## Related Topics:
{', '.join(related_topics)}
[Table content end]"""
processed_result = {
'type': 'table',
'coords': region_info['coords'],
'markdown_table': markdown_table,
'summary': summary,
'educational_value': educational_value,
'related_topics': related_topics,
'text': final_text
}
# Save to cache
self._save_to_cache(image_hash, 'table_ocr', processed_result)
return processed_result
except Exception as e:
print(f"Table region processing error: {e}")
# Fall back to Google Vision OCR on error
return self._process_text_region(region_img, region_info)
def _process_figure_region(self, region_img, region_info):
"""
Process figure region (using Gemini API)
Args:
region_img (numpy.ndarray): Region image
region_info (dict): Region information
Returns:
dict: Processed region information
"""
# Calculate image hash
image_hash = self._calculate_image_hash(region_img)
# Check cache
cached_result = self._get_cached_result(image_hash, 'figure_ocr')
if cached_result is not None:
cached_result['coords'] = region_info['coords']
return cached_result
# Process figure with Gemini API
try:
# Process as text region if Gemini client is not initialized
if self.gemini_client is None:
print("Gemini API client not initialized. Processing as text region.")
return self._process_text_region(region_img, region_info)
# Convert image to PIL format
pil_image = Image.fromarray(cv2.cvtColor(region_img, cv2.COLOR_BGR2RGB))
# Create prompt
prompt = """
Analyze this image and respond in the following format:
1. Describe in detail what is included in the image. Divide into paragraphs for better readability.
2. Explain the educational significance and importance of this image.
3. List related learning topics.
4. Explain how this image could be used in exam questions.
Provide your response in the following JSON format:
{
"description": "Image description (write in multiple paragraphs for better readability)",
"educational_value": "Educational significance and importance",
"related_topics": ["Related topic 1", "Related topic 2", ...],
"exam_relevance": "Exam relevance"
}
Return only the JSON format without any other text. Write the description in multiple paragraphs for better readability.
"""
# API call (latest method)
print("Calling Gemini API - processing figure region")
# Convert image to bytes
img_byte_arr = io.BytesIO()
pil_image.save(img_byte_arr, format='PNG')
img_bytes = img_byte_arr.getvalue()
contents = [
types.Content(
role="user",
parts=[
types.Part.from_text(text=prompt),
types.Part.from_bytes(data=img_bytes, mime_type="image/png")
],
),
]
generate_content_config = types.GenerateContentConfig(
response_mime_type="text/plain",
)
response = self.gemini_client.models.generate_content(
model="gemini-2.0-flash",
contents=contents,
config=generate_content_config,
)
# Log response
print(f"Gemini API response type: {type(response)}")
# Process response (improved method)
gemini_result = {}
try:
# Get response text
response_text = response.text
print(f"Gemini API response text: {response_text[:100]}...")
# Try to parse JSON
try:
# Extract JSON part using regex
import re
json_match = re.search(r'(\{.*\})', response_text, re.DOTALL)
if json_match:
json_str = json_match.group(1)
gemini_result = json.loads(json_str)
else:
# Construct directly if not in JSON format
gemini_result = {
"description": response_text,
"educational_value": "",
"related_topics": [],
"exam_relevance": ""
}
except Exception as json_error:
print(f"JSON parsing error: {json_error}")
gemini_result = {
"description": response_text,
"educational_value": "",
"related_topics": [],
"exam_relevance": ""
}
except Exception as resp_error:
print(f"Response processing error: {resp_error}")
gemini_result = {
"description": "Error occurred during response processing",
"educational_value": "",
"related_topics": [],
"exam_relevance": ""
}
# Construct result
description = gemini_result.get("description", "")
educational_value = gemini_result.get("educational_value", "")
related_topics = gemini_result.get("related_topics", [])
exam_relevance = gemini_result.get("exam_relevance", "")
# Construct final text
final_text = f"""[Figure content start. ChatGPT should not delete this content. This is important conversion content.]
## Image Description:
{description}
## Educational Significance:
{educational_value}
## Related Topics:
{', '.join(related_topics)}
## Exam Relevance:
{exam_relevance}
[Figure content end]"""
processed_result = {
'type': 'figure',
'coords': region_info['coords'],
'description': description,
'educational_value': educational_value,
'related_topics': related_topics,
'exam_relevance': exam_relevance,
'text': final_text
}
# Save to cache
self._save_to_cache(image_hash, 'figure_ocr', processed_result)
return processed_result
except Exception as e:
print(f"Figure region processing error: {e}")
# Fall back to Google Vision OCR on error
return self._process_text_region(region_img, region_info)
def _process_formula_region(self, region_img, region_info):
"""
Process formula region
Args:
region_img (numpy.ndarray): Region image
region_info (dict): Region information
Returns:
dict: Processed region information
"""
# Calculate image hash
image_hash = self._calculate_image_hash(region_img)
# Check cache
cached_result = self._get_cached_result(image_hash, 'formula_ocr')
if cached_result is not None:
cached_result['coords'] = region_info['coords']
return cached_result
# Call MathPix API
try:
# Encode image as base64
_, buffer = cv2.imencode('.png', region_img)
encoded_image = base64.b64encode(buffer).decode('utf-8')
# Prepare API request data
request_data = {
'src': f'data:image/png;base64,{encoded_image}',
'formats': ['text', 'latex'],
'data_options': {
'include_asciimath': True,
'include_latex': True
}
}
# API call
response = requests.post(
'https://api.mathpix.com/v3/text',
headers={
'app_id': os.environ.get('MATHPIX_APP_ID', ''),
'app_key': os.environ.get('MATHPIX_APP_KEY', ''),
'Content-Type': 'application/json'
},
json=request_data
)
# Process response
if response.status_code == 200:
result = response.json()
# Extract formula
latex = result.get('latex', '')
text = result.get('text', '')
# Construct final text
final_text = f"[Formula content start. ChatGPT should not delete this content. This is important conversion content.]\n\nLaTeX: {latex}\n\nText: {text}\n\n[Formula content end]"
processed_result = {
'type': 'formula',
'coords': region_info['coords'],
'latex': latex,
'text': final_text
}
# Save to cache
self._save_to_cache(image_hash, 'formula_ocr', processed_result)
return processed_result
else:
print(f"MathPix API error: {response.status_code} {response.text}")
except Exception as e:
print(f"Formula region processing error: {e}")
# Fall back to Google Vision OCR on error
return self._process_text_region(region_img, region_info)
def _process_title_region(self, region_img, region_info):
"""
Process title region
Args:
region_img (numpy.ndarray): Region image
region_info (dict): Region information
Returns:
dict: Processed region information
"""
# Process same as text region
result = self._process_text_region(region_img, region_info)
result['type'] = 'title'
return result
def _process_list_region(self, region_img, region_info):
"""
Process list region
Args:
region_img (numpy.ndarray): Region image
region_info (dict): Region information
Returns:
dict: Processed region information
"""
# Process same as text region
result = self._process_text_region(region_img, region_info)
result['type'] = 'list'
return result
def _process_regions(self, image_np, regions):
"""
Process detected regions
Args:
image_np (numpy.ndarray): Original image
regions (list): List of detected regions
Returns:
list: List of processed regions
"""
processed_regions = []
# Set to store coordinates of already processed regions
processed_coords = set()
for region in regions:
# Convert region coordinates to string for duplicate checking
region_key = f"{region['coords'][0]}_{region['coords'][1]}_{region['coords'][2]}_{region['coords'][3]}"
# Skip if already processed
if region_key in processed_coords:
continue
# Extract region image
region_img = self._crop_region(image_np, region)
# Process based on region type
if region['type'] == 'text':
processed_region = self._process_text_region(region_img, region)
elif region['type'] == 'title':
processed_region = self._process_title_region(region_img, region)
elif region['type'] == 'list':
processed_region = self._process_list_region(region_img, region)
elif region['type'] == 'table':
processed_region = self._process_table_region(region_img, region)
elif region['type'] == 'figure':
processed_region = self._process_figure_region(region_img, region)
elif region['type'] == 'formula':
processed_region = self._process_formula_region(region_img, region)
else:
# Process unknown types as text
processed_region = self._process_text_region(region_img, region)
# Add processed region
processed_regions.append(processed_region)
# Store processed region coordinates
processed_coords.add(region_key)
# Sort by Y coordinate
processed_regions.sort(key=lambda r: r['coords'][1])
return processed_regions
def _combine_processed_regions(self, processed_regions):
"""
Combine processed regions to generate final text
Args:
processed_regions (list): List of processed regions
Returns:
str: Combined text
"""
combined_text = ""
for region in processed_regions:
if 'text' in region and region['text']:
combined_text += region['text'] + "\n\n"
return combined_text.strip()
def _upload_to_gcs(self, data, gcs_path):
"""
Upload results to GCS
Args:
data (dict): Data to upload
gcs_path (str): GCS path
Returns:
bool: Upload success status
"""
if not self.storage_client:
print(f"GCS client not initialized, skipping upload: {gcs_path}")
return False
try:
bucket = self.storage_client.bucket(self.BUCKET_NAME)
blob = bucket.blob(gcs_path)
# Serialize JSON data
json_data = json.dumps(data, ensure_ascii=False, indent=2)
# Upload
blob.upload_from_string(json_data, content_type="application/json")
print(f"GCS upload complete: gs://{self.BUCKET_NAME}/{gcs_path}")
return True
except Exception as e:
print(f"GCS upload error: {e}")
return False
def process_image(self, image_path):
"""
Main image processing function
Args:
image_path (str): Path to image to process
Returns:
dict: Processing results
"""
start_time = time.time()
# Load image
image_np = cv2.imread(image_path)
if image_np is None:
return {'error': f"Cannot load image: {image_path}"}
# Get image dimensions
height, width = image_np.shape[:2]
# Detect regions
regions = self._detect_regions(image_np)
# Process regions
processed_regions = self._process_regions(image_np, regions)
# Combine text
text = self._combine_processed_regions(processed_regions)
# Calculate processing time
processed_time = time.time() - start_time
# Return results
return {
'width': width,
'height': height,
'regions': regions,
'processed_regions': processed_regions,
'text': text,
'region_positions': [region['coords'] for region in processed_regions],
'processed_time': datetime.now().isoformat()
}
def process_pdf(self, pdf_path, output_folder=None):
"""
Process PDF file
Args:
pdf_path (str): PDF file path
output_folder (str): Output folder path
Returns:
dict: Processing results summary
"""
try:
from pdf2image import convert_from_path, pdfinfo_from_path
# Extract PDF filename
pdf_file = os.path.basename(pdf_path)
# Extract subject name (from filename or use default)
subject = pdf_file.replace(".pdf", "").split("_")[-1] if "_" in pdf_file else "Unknown"
print(f"Starting PDF processing: {pdf_file}, Subject: {subject}")
# Read PDF info
pdf_info = pdfinfo_from_path(pdf_path)
num_pages = pdf_info["Pages"]
print(f"PDF page count: {num_pages}")
# Set output folder
if output_folder is None:
output_folder = os.path.join(os.path.dirname(pdf_path), "output")
# Create output folder
os.makedirs(output_folder, exist_ok=True)
# Create subject folder
subject_folder = os.path.join(output_folder, subject)
os.makedirs(subject_folder, exist_ok=True)
# Create PDF name folder
pdf_name = pdf_file.replace(".pdf", "")
pdf_folder = os.path.join(subject_folder, pdf_name)
os.makedirs(pdf_folder, exist_ok=True)
# Convert PDF to images
images = convert_from_path(pdf_path, dpi=300)
print(f"PDF converted to {len(images)} images")
# Process each page
results = []
for i, image in enumerate(images):
print(f"Processing page {i+1}/{len(images)}...")
# Save image
image_path = os.path.join(pdf_folder, f"page_{i+1}.jpg")
image.save(image_path, "JPEG")
# Process image
page_result = self.process_image(image_path)
results.append(page_result)
# Save results
output_path = os.path.join(pdf_folder, f"page_{i+1}.json")
self.save_result(page_result, output_path)
# Upload page results to GCS
gcs_path = f"{subject}/stage1/{pdf_name}/page_{i+1}.json"
self._upload_to_gcs(page_result, gcs_path)
# Create summary results
summary = {
"pdf_name": pdf_name,
"num_pages": num_pages,
"processed_time": datetime.now().isoformat(),
"pages": [{"page": i+1, "status": "processed"} for i in range(len(images))]
}
# Save summary results
summary_path = os.path.join(pdf_folder, "summary_stage1.json")
self.save_result(summary, summary_path)
# Upload summary results to GCS
gcs_summary_path = f"{subject}/stage1/{pdf_name}/summary_stage1.json"
self._upload_to_gcs(summary, gcs_summary_path)
print(f"PDF processing complete: {pdf_file}")
return summary
except Exception as e:
print(f"PDF processing error: {e}")
return {"error": str(e)}
def save_result(self, result, output_path):
"""
Save processing results to JSON file
Args:
result (dict): Processing results
output_path (str): Path to save file
"""
with open(output_path, 'w', encoding='utf-8') as f:
json.dump(result, f, ensure_ascii=False, indent=2)
# (AdvancedOCR class and other code parts use the definitions above)
if __name__ == "__main__":
    import argparse
parser = argparse.ArgumentParser(description='Advanced OCR Processing')
# Required argument: --input (accepts both single file or directory)
parser.add_argument('--input', default='/app/input', help='Input file or directory path (image or PDF)')
# Optional argument: --output
parser.add_argument('--output', help='Output JSON file path (for image) or output folder (for PDF)')
parser.add_argument('--model', default=None, help='DocLayout-YOLO model path')
parser.add_argument('--confidence', type=float, default=0.5, help='Detection confidence threshold')
parser.add_argument('--no-cache', action='store_true', help='Disable caching')
parser.add_argument('--cache-dir', default='cache', help='Cache directory path')
args = parser.parse_args()
# Create OCR processing object
ocr = AdvancedOCR(
model_path=args.model,
confidence_threshold=args.confidence,
use_cache=not args.no_cache,
cache_dir=args.cache_dir
)
input_path = args.input
# If input is a directory, process all PDF files in that directory
if os.path.isdir(input_path):
import glob
pdf_files = glob.glob(os.path.join(input_path, "*.pdf"))
if not pdf_files:
print(f"No PDF files found in input directory {input_path}.")
for pdf_file in pdf_files:
# Output file path: create output folder in the directory containing the PDF file
output_folder = args.output if args.output else os.path.join(os.path.dirname(pdf_file), "output")
result = ocr.process_pdf(pdf_file, output_folder)
ocr.save_result(result, os.path.join(output_folder, os.path.splitext(os.path.basename(pdf_file))[0] + ".json"))
print(f"PDF processing complete: {pdf_file} -> {output_folder}")
else:
# For single file: process as PDF or image based on extension
if input_path.lower().endswith('.pdf'):
output_folder = args.output if args.output else os.path.join(os.path.dirname(input_path), "output")
result = ocr.process_pdf(input_path, output_folder)
print(f"PDF processing complete: {input_path} -> {output_folder}")
else:
result = ocr.process_image(input_path)
output_path = args.output if args.output else input_path.rsplit('.', 1)[0] + '.json'
ocr.save_result(result, output_path)
print(f"Image processing complete: {input_path} -> {output_path}")
# To customize output language, modify the prompt strings in the following methods:
# 1. _process_table_region(): Change the prompt text to your desired language
# 2. _process_figure_region(): Change the prompt text to your desired language
# 3. _process_formula_region(): Change the final_text format to your desired language
# 4. Change the language hints in _process_text_region() from ['ja', 'en', 'ko'] to your preferred languages
# IMPORTANT: Do not make any omissions or changes to the content, abbreviations, or code modifications. This code is already working correctly.
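# Example invocations (paths are illustrative):
#   python advanced_ocr.py --input /app/input
#   python advanced_ocr.py --input lecture_notes.pdf --output ./output
#   python advanced_ocr.py --input page_1.jpg --no-cache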
================================================
FILE: v1.0_initial/custom_doclayout_yolo.py
================================================
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
custom_doclayout_yolo.py
- Performs document layout analysis using the DocLayout-YOLO model.
- Updated for compatibility with PyTorch 2.0.1 or higher.
- Loads the model using the officially recommended method (hf_hub_download or from_pretrained).
"""
import os
import torch
import logging
from huggingface_hub import hf_hub_download
from doclayout_yolo import YOLOv10
logger = logging.getLogger(__name__)
class DocLayoutYOLO:
"""DocLayout-YOLO model wrapper class"""
def __init__(self, model_path=None):
"""
Initialize the DocLayout-YOLO model
Args:
model_path (str, optional): Local model file path.
If not provided, the pre-trained model will be loaded from Hugging Face Hub.
"""
self.model_path = model_path
self.model = None
self.device = "cuda:0" if torch.cuda.is_available() else "cpu"
self.init_model()
def init_model(self):
"""Initialize the model"""
try:
if self.model_path and os.path.exists(self.model_path):
# Use the local model file if available
logger.info(f"Loading local model file: {self.model_path}")
self.model = YOLOv10(self.model_path)
else:
# If a local file is not available, download and load the pre-trained model from Hugging Face Hub
logger.info("Loading pre-trained model from Hugging Face (using hf_hub_download)")
filepath = hf_hub_download(
repo_id="juliozhao/DocLayout-YOLO-DocStructBench",
filename="doclayout_yolo_docstructbench_imgsz1024.pt"
)
self.model = YOLOv10(filepath)
# Alternatively, you can use the from_pretrained method as follows:
# self.model = YOLOv10.from_pretrained("juliozhao/DocLayout-YOLO-DocStructBench")
logger.info("DocLayout-YOLO model loaded successfully")
return True
except Exception as e:
logger.error(f"Failed to initialize DocLayout-YOLO model: {e}")
try:
from ultralytics import YOLO
if self.model_path and os.path.exists(self.model_path):
self.model = YOLO(self.model_path)
else:
self.model = YOLO("yolov8n.pt")
logger.info("Successfully loaded ultralytics YOLO model as an alternative")
return True
except Exception as e2:
logger.error(f"Alternative initialization failed: {e2}")
self.model = None
return False
def predict(self, image_path, imgsz=1024, conf=0.25, device=None):
"""
Perform layout prediction on the image.
Args:
image_path (str): Path to the image file.
imgsz (int): Input image size.
conf (float): Confidence threshold.
device (str, optional): Device to use (if None, automatically selected).
Returns:
list: List of prediction results.
"""
if self.model is None:
logger.error("The model is not initialized")
return []
if not os.path.exists(image_path):
logger.error(f"Image file does not exist: {image_path}")
return []
if device is None:
device = self.device
try:
results = self.model.predict(
source=image_path,
imgsz=imgsz,
conf=conf,
device=device
)
return results
except Exception as e:
logger.error(f"Prediction failed: {e}")
return []
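# Example usage (a minimal sketch; the image path is illustrative):
#   model = DocLayoutYOLO()  # downloads pre-trained weights from Hugging Face Hub
#   results = model.predict("page_1.jpg", imgsz=1024, conf=0.25)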
================================================
FILE: v1.0_initial/ocr_stage1.py
================================================
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
ML OCR System - Docker Container Execution Version for Vertex AI Notebook (Final Version)
- PDF Input from Host: /home/jupyter/Google Drive/Study Materials/
- GCS Upload: eju-ocr-results/Chemistry/stage1/[pdf_name]/page_{n}.json
"""
import os
import json
import logging
import subprocess
import argparse
import glob
from datetime import datetime
from dotenv import load_dotenv
load_dotenv('/home/jupyter/Your_Folder_Name/.env')
# ----------------------------
# [1] Log Configuration
# ----------------------------
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
handlers=[
logging.StreamHandler()
]
)
logger = logging.getLogger(__name__)
# ----------------------------
# [2] Docker Container Execution Function
# ----------------------------
def run_docker_container(input_dir, output_dir, credentials_dir, image_name="cantaloupe"):
    """
    Run Docker container to perform OCR processing.
    Args:
        input_dir (str): Host-side PDF file directory path
        output_dir (str): Host-side OCR results/logs storage directory path
        credentials_dir (str): Host-side Google Cloud credentials directory
        image_name (str): Docker image name to use (matches the default in main())
    Returns:
        bool: Success status
    """
    gemini_api_key = os.environ.get("GEMINI_API_KEY", "")
try:
# Convert to absolute paths
input_dir = os.path.abspath(input_dir)
output_dir = os.path.abspath(output_dir)
credentials_dir = os.path.abspath(credentials_dir)
# Check and create directories
os.makedirs(output_dir, exist_ok=True)
# Check input directory
if not os.path.exists(input_dir):
logger.error(f"Input directory does not exist: {input_dir}")
return False
# Check PDF files
pdf_files = [os.path.join(input_dir, f) for f in os.listdir(input_dir) if f.lower().endswith(".pdf")]
logger.info(f"Number of PDF files found in input directory: {len(pdf_files)}")
if len(pdf_files) > 0:
logger.info(f"PDF file list (max 20): {pdf_files[:20]}")
else:
logger.warning(f"No PDF files in input directory: {input_dir}")
# Continue even if no PDF files
# Check if Docker image exists
result = subprocess.run(
["docker", "images", "-q", image_name],
capture_output=True,
text=True
)
if not result.stdout.strip():
logger.info(f"Docker image '{image_name}' not found. Starting build.")
# Build Docker image (Dockerfile location example)
docker_dir = "/home/jupyter/YOUR_DOCKER_DIRECTORY"
subprocess.run(
["docker", "build", "-t", image_name, docker_dir],
check=True
)
logger.info(f"Docker image '{image_name}' build complete")
# Create command string to handle paths with spaces (added GPU usage)
cmd_str = " ".join([
"docker", "run", "--gpus", "all", "--rm",
"--runtime=nvidia",
"-e NVIDIA_VISIBLE_DEVICES=all",
"-e NVIDIA_DRIVER_CAPABILITIES=compute,utility",
f"-v \"{input_dir}\":/app/input",
f"-v \"{output_dir}\":/app/output",
f"-v \"{credentials_dir}\":/app/credentials",
"-e PDF_FOLDER=/app/input",
"-e OUTPUT_FOLDER=/app/output",
"-e PYTHONUNBUFFERED=1",
f"-e GOOGLE_APPLICATION_CREDENTIALS=/app/credentials/Google_Vision_S.Account.json",
"-e PYTHONPATH=/app:/app/DocLayout-YOLO",
f"-e GEMINI_API_KEY={gemini_api_key}",
image_name,
"python /app/advanced_ocr.py"
])
logger.info(f"Running Docker container: {cmd_str}")
# Use shell=True to handle paths with spaces
subprocess.run(cmd_str, shell=True, check=True)
logger.info("Docker container execution complete")
return True
except subprocess.CalledProcessError as e:
logger.error(f"Docker container execution failed: {e}")
return False
except Exception as e:
logger.error(f"Error occurred: {e}")
return False
# ----------------------------
# [3] Main Function
# ----------------------------
def main():
parser = argparse.ArgumentParser(description="OCR System - Docker (Final Version)")
# Dummy argument to ignore -f argument automatically added by Jupyter/Colab
parser.add_argument("-f", "--somefile", help="(Jupyter) ignore this argument", default=None)
# Existing arguments
parser.add_argument("--input-dir", default="/home/jupyter/Google Drive/Study Materials",
help="Host-side PDF directory for OCR processing (default: /home/jupyter/Google Drive/Study Materials)")
parser.add_argument("--output-dir", default="/home/jupyter/ocr_output",
help="Host-side OCR results/logs directory (default: /home/jupyter/ocr_output)")
parser.add_argument("--credentials-dir", default="/home/jupyter/credentials",
help="Google Cloud credentials directory (default: /home/jupyter/credentials)")
parser.add_argument("--image-name", default="cantaloupe", #You have to change the image name
help="Docker image name to use (default: cantaloupe)")
# Use parse_known_args() to ignore unknown arguments like -f
args, unknown = parser.parse_known_args()
if unknown:
logger.info(f"Ignored arguments: {unknown}")
logger.info("=== OCR System (Docker) Starting ===")
logger.info(f"Input directory (host): {args.input_dir}")
logger.info(f"Output directory (host): {args.output_dir}")
logger.info(f"Credentials directory (host): {args.credentials_dir}")
logger.info(f"Docker image name: {args.image_name}")
success = run_docker_container(
input_dir=args.input_dir,
output_dir=args.output_dir,
credentials_dir=args.credentials_dir,
image_name=args.image_name
)
if success:
logger.info("=== OCR System Complete ===")
else:
logger.error("=== OCR System Failed ===")
if __name__ == "__main__":
main()
# To customize output language, modify the log messages in this file.
# Environment variables are kept as is since they are configuration paths.
# If you need to change the input directory path, modify the default value in the
# --input-dir argument in the main() function.
================================================
FILE: v1.0_initial/ocr_stage2.py
================================================
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
ocr_stage2.py - ML OCR System Stage 2 (ChatGPT Correction)
Features:
1) Load stage1 results from all folders in GCS bucket
2) Use ChatGPT for context-based text correction
- Mark uncertain text with [?]
- Simplify special content tags (formulas, figures, tables, etc.)
- Only correct special content when high error probability
- Remove unnecessary content
3) Save corrected results to stage2 folder at the same level as stage1
4) Skip folders that already have stage2 folder
"""
import os
import re
import json
import logging
import argparse
import difflib
from datetime import datetime
from typing import Dict, List, Any, Tuple, Optional, Set
# OpenAI API
from openai import OpenAI
from dotenv import load_dotenv
load_dotenv("/home/jupyter/Your_Folder_Name/.env")
# Google Cloud Storage
from google.cloud import storage
# Logging configuration
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
handlers=[
logging.FileHandler("ocr_stage2.log"),
logging.StreamHandler()
]
)
logger = logging.getLogger(__name__)
# Environment variables
BUCKET_NAME = os.environ.get("GCS_BUCKET_NAME", "YOUR_GCS_BUCKET_NAME")
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
# Initialize OpenAI client
client = None
if OPENAI_API_KEY:
client = OpenAI(api_key=OPENAI_API_KEY)
logger.info("OpenAI client initialized successfully")
else:
logger.warning("OPENAI_API_KEY is not set. ChatGPT calls may fail.")
# Initialize Google Cloud Storage client
try:
storage_client = storage.Client()
logger.info("Google Cloud Storage client initialized successfully")
except Exception as e:
logger.error(f"Failed to initialize Google Cloud Storage client: {e}")
storage_client = None
# Special content tag patterns (regex)
SPECIAL_CONTENT_PATTERNS = {
"formula": r"\[Formula content start\. ChatGPT should not delete this content\. This is important conversion content\.\](.*?)\[Formula content end\]",
"figure": r"\[Figure content start\. ChatGPT should not delete this content\. This is important conversion content\.\](.*?)\[Figure content end\]",
"chart": r"\[Chart content start\. ChatGPT should not delete this content\. This is important conversion content\.\](.*?)\[Chart content end\]",
"chemical_structure": r"\[Chemical structure start\. ChatGPT should not delete this content\. This is important conversion content\.\](.*?)\[Chemical structure end\]",
"math_graph": r"\[Math graph start\. ChatGPT should not delete this content\. This is important conversion content\.\](.*?)\[Math graph end\]",
"table": r"\[Table content start\. ChatGPT should not delete this content\. This is important conversion content\.\](.*?)\[Table content end\]"
}
# Simplified tag format
SIMPLIFIED_TAGS = {
"formula": ("[FormulaStart]", "[FormulaEnd]"),
"figure": ("[FigureStart]", "[FigureEnd]"),
"chart": ("[ChartStart]", "[ChartEnd]"),
"chemical_structure": ("[ChemicalStructureStart]", "[ChemicalStructureEnd]"),
"math_graph": ("[MathGraphStart]", "[MathGraphEnd]"),
"table": ("[TableStart]", "[TableEnd]")
}
def parse_gcs_prefix(gcs_url: str) -> Tuple[str, str]:
"""
Separate bucket and prefix parts from gs://bucket/folder/... format
Args:
gcs_url: GCS URL (gs://bucket/folder/...)
Returns:
Tuple[str, str]: (bucket_name, prefix)
"""
no_scheme = gcs_url.replace("gs://", "")
parts = no_scheme.split("/", 1)
bucket_name = parts[0]
prefix = parts[1] if len(parts) > 1 else ""
return bucket_name, prefix
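# Example: parse_gcs_prefix("gs://my-bucket/biology/stage1/page_1.json")
# returns ("my-bucket", "biology/stage1/page_1.json").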
def load_json_from_gcs(gcs_url: str) -> Optional[Dict]:
"""
Download JSON file from GCS path and return as Python dict
Args:
gcs_url: GCS URL (gs://bucket/blob_path)
Returns:
Optional[Dict]: Loaded JSON data or None (on error)
"""
try:
if not gcs_url.startswith("gs://"):
logger.error(f"Invalid GCS URL format: {gcs_url}")
return None
bucket_name, blob_path = parse_gcs_prefix(gcs_url)
bucket = storage_client.bucket(bucket_name)
blob = bucket.blob(blob_path)
if not blob.exists():
logger.error(f"Blob not found: {gcs_url}")
return None
data_str = blob.download_as_text(encoding="utf-8")
data = json.loads(data_str)
logger.info(f"JSON loaded successfully: {gcs_url}")
return data
except Exception as e:
logger.error(f"Error loading JSON from GCS: {e}")
return None
def save_json_to_gcs(data: Dict, gcs_path: str) -> Optional[str]:
"""
Serialize data to JSON and upload to GCS
Args:
data: Data to save (dict)
gcs_path: GCS path (excluding bucket, e.g., "biology/stage2/2010_1_B/page_1_stage2.json")
Returns:
Optional[str]: Saved GCS URL or None (on error)
"""
try:
bucket = storage_client.bucket(BUCKET_NAME)
if not bucket.exists():
bucket.create()
blob = bucket.blob(gcs_path)
json_data = json.dumps(data, ensure_ascii=False, indent=2)
blob.upload_from_string(json_data, content_type="application/json")
logger.info(f"JSON saved successfully: gs://{BUCKET_NAME}/{gcs_path}")
return f"gs://{BUCKET_NAME}/{gcs_path}"
except Exception as e:
logger.error(f"Error saving JSON to GCS: {e}")
return None
def check_folder_exists(folder_path: str) -> bool:
"""
Check if GCS folder exists
Args:
folder_path: GCS folder path (excluding bucket, e.g., "biology/stage2/")
Returns:
bool: Whether folder exists
"""
try:
bucket = storage_client.bucket(BUCKET_NAME)
# GCS doesn't actually have folder concept, so check if any blob with this prefix exists
blobs = list(bucket.list_blobs(prefix=folder_path, max_results=1))
return len(blobs) > 0
except Exception as e:
logger.error(f"Error checking if GCS folder exists: {e}")
return False
def simplify_special_content_tags(text: str) -> str:
"""
Simplify special content tags
Args:
text: Original text
Returns:
str: Text with simplified tags
"""
simplified_text = text
for content_type, pattern in SPECIAL_CONTENT_PATTERNS.items():
start_tag, end_tag = SIMPLIFIED_TAGS[content_type]
def replace_tags(match):
content = match.group(1).strip()
# Add line breaks between label and content, and between content and end label
return f"{start_tag}\n\n{content}\n\n{end_tag}"
simplified_text = re.sub(pattern, replace_tags, simplified_text, flags=re.DOTALL)
return simplified_text
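# Example: "[Formula content start. ChatGPT should not delete this content. This is
# important conversion content.]x^2 + y^2 = 1[Formula content end]" becomes
# "[FormulaStart]\n\nx^2 + y^2 = 1\n\n[FormulaEnd]".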
def extract_special_content(text: str) -> Tuple[str, Dict[str, List[Dict[str, str]]]]:
"""
Extract special content (formulas, figures, tables, etc.) from text and replace with placeholders
Args:
text: Original text
Returns:
Tuple[str, Dict]: (Text with placeholders, special content information)
"""
placeholder_text = text
special_contents = {}
for content_type, pattern in SPECIAL_CONTENT_PATTERNS.items():
special_contents[content_type] = []
# Find special content
matches = list(re.finditer(pattern, text, re.DOTALL))
# Process from end to avoid index changes
for i, match in enumerate(reversed(matches)):
# Use clearer placeholder format (easier for ChatGPT to recognize)
placeholder_id = f"___SPECIAL_CONTENT_{content_type}_{len(matches) - i - 1}_DO_NOT_REMOVE_THIS_PLACEHOLDER___"
content = match.group(1).strip()
# Replace special content with placeholder in original text
start, end = match.span()
placeholder_text = placeholder_text[:start] + placeholder_id + placeholder_text[end:]
# Save special content information
special_contents[content_type].append({
"id": placeholder_id,
"content": content,
"original_tag": match.group(0)
})
return placeholder_text, special_contents
def restore_special_content(text: str, special_contents: Dict[str, List[Dict[str, str]]]) -> str:
"""
Restore placeholders to simplified special content tags
Args:
text: Text with placeholders
special_contents: Special content information
Returns:
str: Text with restored special content
"""
restored_text = text
# Process all special content types
for content_type, contents in special_contents.items():
start_tag, end_tag = SIMPLIFIED_TAGS[content_type]
for content_info in contents:
placeholder_id = content_info["id"]
content = content_info["content"]
# Replace placeholder with simplified tag
if placeholder_id in restored_text:
# Replace if placeholder exists
restored_text = restored_text.replace(
placeholder_id,
f"{start_tag}\n\n{content}\n\n{end_tag}"
)
else:
# Try to restore original position if placeholder was deleted
logger.warning(f"Placeholder '{placeholder_id}' was deleted in ChatGPT response. Preserving original tag.")
# Add special content to end of text
if not restored_text.endswith("\n"):
restored_text += "\n"
restored_text += f"\n{start_tag}\n\n{content}\n\n{end_tag}\n"
# Line break processing - ensure proper display in JSON output
# This part doesn't affect JSON storage so no modification needed here
return restored_text
def chatgpt_correct_text(original_text: str) -> Dict[str, Any]:
"""
Use ChatGPT to correct OCR text
Args:
original_text: Original OCR text
Returns:
Dict: Correction results (corrected_text, confidence, special_content_corrections)
"""
if not client:
logger.error("OpenAI client not initialized. Check OPENAI_API_KEY.")
return {"corrected_text": original_text, "confidence": 0.0, "special_content_corrections": {}}
if not original_text:
return {"corrected_text": "", "confidence": 0.0, "special_content_corrections": {}}
# First simplify special content tags
simplified_text = simplify_special_content_tags(original_text)
# Extract special content and replace with placeholders
placeholder_text, special_contents = extract_special_content(simplified_text)
# Log: Original text length
logger.info(f"Sending text to ChatGPT (length={len(placeholder_text)}).")
# System prompt - correction guidelines (enhanced version)
system_prompt = """You are an expert in accurately correcting Japanese OCR results. Please strictly follow these guidelines:
1. Identify and correct clear OCR errors based on context.
2. Mark text that is difficult to infer from context or where corrections might significantly alter content as [?text?].
3. Never change the original language of any text:
- Keep Korean text in Korean.
- Keep Japanese text in Japanese.
- Keep English text in English.
- Do not translate any language to another language.
4. Never modify or translate special area tags and content enclosed in brackets:
- Special area tag formats: "[XXStart]", "[XXEnd]" or placeholders starting with "___SPECIAL_CONTENT_..."
- These tags and placeholders contain important content that must be preserved exactly as is.
- Within special areas, only correct obvious typos without deleting or omitting any content.
5. Delete content that is completely unnecessary in context (e.g., duplicate text, page numbers).
6. Add empty lines between paragraphs to improve readability.
7. Improve alignment of Markdown format tables and charts for better readability.
8. Return only the corrected text without explanations or comments.
Important: Maintain the original language of all text, and never delete or translate special area tags and content enclosed in brackets! This information is essential for ML training!
"""
# User prompt - OCR text
user_prompt = f"""The following is a Japanese OCR result. Please correct errors according to the guidelines above:
-----------
{placeholder_text}
-----------
Return only the corrected text without additional explanations or comments.
Never change the original language of any text. Keep Korean in Korean, Japanese in Japanese, and English in English.
Never delete or translate special area tags and content enclosed in brackets! This information is essential for ML training!
Do not delete or omit any content, only correct obvious typos.
"""
try:
# Call ChatGPT
completion = client.chat.completions.create(
model="gpt-4o", # or "gpt-4" or "gpt-3.5-turbo"
messages=[
{"role": "system", "content": system_prompt},
{"role": "user", "content": user_prompt}
],
temperature=0.2,
max_tokens=4096
)
# Extract corrected text
corrected_placeholder_text = completion.choices[0].message.content.strip()
# Restore special content
corrected_text = restore_special_content(corrected_placeholder_text, special_contents)
# Calculate similarity
sm = difflib.SequenceMatcher(None, original_text, corrected_text)
confidence = sm.ratio()
logger.info(f"ChatGPT response length={len(corrected_text)}, similarity={confidence:.3f}")
return {
"corrected_text": corrected_text,
"confidence": confidence,
"special_content_corrections": {} # Can add special content correction info in future
}
except Exception as e:
logger.error(f"ChatGPT error: {e}")
return {
"corrected_text": original_text,
"confidence": 0.0,
"special_content_corrections": {}
}
def chatgpt_correct_special_content(content_type: str, content: str) -> Dict[str, Any]:
"""
Use ChatGPT to correct special content (formulas, figures, tables, etc.)
Args:
content_type: Content type (formula, figure, table, etc.)
content: Original content
Returns:
Dict: Correction results (corrected_content, confidence)
"""
# Return original content without correction
logger.info(f"{content_type} content is kept as is without correction.")
return {"corrected_content": content, "confidence": 1.0}
def extract_page_number_from_filename(filename: str) -> Optional[int]:
"""
Extract page number from filename
Args:
filename: Filename (e.g., "page_7.json")
Returns:
Optional[int]: Extracted page number or None
"""
match = re.search(r'page_(\d+)\.json', filename)
if match:
return int(match.group(1))
return None
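# e.g. extract_page_number_from_filename("page_7.json") -> 7, while
# "summary_stage1.json" and "page_7_stage2.json" both return None, since the
# pattern requires ".json" to follow the digits immediately.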
def process_page_stage2(page_data: Dict, original_blob_name: str, folder_name: str, subfolder: str) -> Dict[str, Any]:
"""
Correct page OCR results with ChatGPT and save
Args:
page_data: Page OCR result data
original_blob_name: Original blob name (e.g., "TOEFL/stage1/2010_1_B/page_7.json")
folder_name: Parent folder name (e.g., "TOEFL")
subfolder: Subfolder name (e.g., "2010_1_B")
Returns:
Dict: Processing results
"""
# Extract page number from original filename
filename = original_blob_name.split("/")[-1]
page_number = extract_page_number_from_filename(filename)
if page_number is None:
# If page number can't be extracted, get from page data or use default
page_number = page_data.get("page", 0)
logger.warning(f"Could not extract page number from filename {filename}. Using page data or default value {page_number}.")
# Extract original text - use text field already collected in stage1
original_text = page_data.get("text", "")
logger.info(f"Processing page {page_number} (folder: '{folder_name}', subfolder: '{subfolder}', original text length={len(original_text)})")
# Correct text
corrected = chatgpt_correct_text(original_text)
corrected_text = corrected["corrected_text"]
confidence = corrected["confidence"]
special_content_corrections = corrected.get("special_content_corrections", {})
# Construct result data - remove text_original field and change text_corrected to text
result_data = {
"page": page_number,
"text": corrected_text, # Save as text instead of text_corrected
"confidence": confidence,
"special_content_corrections": special_content_corrections,
"processing_date": datetime.now().isoformat(),
"stage": "stage2",
"original_blob_name": original_blob_name
}
# Save result - maintain original page number
page_filename = f"page_{page_number}_stage2.json"
gcs_path = f"{folder_name}/stage2/{subfolder}/{page_filename}"
output_url = save_json_to_gcs(result_data, gcs_path)
if output_url:
logger.info(f"Page {page_number} correction results saved: {output_url}")
return {
"page_number": page_number,
"gcs_url": output_url,
"confidence": confidence,
"original_blob_name": original_blob_name
}
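# Path mapping sketch: the stage1 blob "TOEFL/stage1/2010_1_B/page_7.json" is
# read, corrected, and written back as "TOEFL/stage2/2010_1_B/page_7_stage2.json",
# so the original page numbering survives into stage2.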
def list_top_level_folders() -> List[str]:
"""
List top-level folders in GCS bucket (improved version)
Returns:
List[str]: List of top-level folders
"""
top_folders = set()
bucket = storage_client.bucket(BUCKET_NAME)
logger.info(f"Listing top-level folders in bucket '{BUCKET_NAME}'")
# List all blobs in bucket
blobs = list(bucket.list_blobs())
# Extract top-level folder from each blob path
for blob in blobs:
parts = blob.name.split('/')
if len(parts) > 0 and parts[0]: # Not empty string
top_folders.add(parts[0])
top_folders_list = list(top_folders)
logger.info(f"Top-level folders found: {top_folders_list}")
return top_folders_list
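# Note: the listing above walks every blob in the bucket just to collect first
# path segments. A lighter sketch (assuming the standard google-cloud-storage
# delimiter behavior) lets GCS emulate directories instead; prefixes is only
# populated once the iterator has been consumed:
#
#   iterator = bucket.list_blobs(delimiter="/")
#   list(iterator)  # drain the pages so iterator.prefixes fills in
#   folders = [p.rstrip("/") for p in iterator.prefixes]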
def check_stage1_exists(folder_name: str) -> bool:
"""
Check if stage1 folder exists in folder
Args:
folder_name: Folder name
Returns:
bool: Whether stage1 folder exists
"""
return check_folder_exists(f"{folder_name}/stage1/")
def check_stage2_exists(folder_name: str) -> bool:
"""
Check if stage2 folder exists in folder
Args:
folder_name: Folder name
Returns:
bool: Whether stage2 folder exists
"""
return check_folder_exists(f"{folder_name}/stage2/")
def list_stage1_subfolders(folder_name: str) -> List[str]:
"""
Extract list of subfolders under stage1 in folder
Args:
folder_name: Folder name
Returns:
List[str]: List of subfolders
"""
subfolders = set()
bucket = storage_client.bucket(BUCKET_NAME)
prefix = f"{folder_name}/stage1/"
logger.info(f"Listing subfolders under prefix '{prefix}'")
# List all blobs
blobs = list(bucket.list_blobs(prefix=prefix))
# Extract subfolder from each blob path
for blob in blobs:
parts = blob.name.split("/")
# Example: "TOEFL/stage1/2010_1_B/page_1.json" -> parts = ["TOEFL","stage1","2010_1_B","page_1.json"]
if len(parts) >= 3 and parts[2]: # Not empty string
subfolders.add(parts[2]) # "2010_1_B"
subfolders_list = list(subfolders)
logger.info(f"Subfolders found: {subfolders_list}")
return subfolders_list
def list_page_blobs(folder_name: str, subfolder: str) -> List[Any]:
"""
List page_n.json files in specific subfolder
Args:
folder_name: Folder name
subfolder: Subfolder name
Returns:
List[Any]: List of blobs
"""
folder_prefix = f"{folder_name}/stage1/{subfolder}/"
bucket = storage_client.bucket(BUCKET_NAME)
logger.info(f"Listing page blobs under subfolder '{subfolder}' (prefix='{folder_prefix}')")
# List all blobs
all_blobs = list(bucket.list_blobs(prefix=folder_prefix))
# Filter for page_n.json files
page_blobs = [
blob for blob in all_blobs
if blob.name.endswith(".json") and "summary_stage1" not in blob.name
]
# Sort numerically by page number (a plain name sort is lexicographic and
# would put page_10.json before page_2.json)
page_blobs.sort(key=lambda b: extract_page_number_from_filename(b.name.split("/")[-1]) or 0)
logger.info(f"Found {len(page_blobs)} page blobs in subfolder '{subfolder}'")
return page_blobs
def process_folder(folder_name: str) -> Dict[str, Any]:
"""
Process stage1 data in folder to create stage2
Args:
folder_name: Folder name
Returns:
Dict: Processing results
"""
results = {}
# Check if stage1 folder exists
if not check_stage1_exists(folder_name):
logger.warning(f"No stage1 folder in folder '{folder_name}'. Skipping.")
return results
# Check if stage2 folder exists (skip if already exists)
if check_stage2_exists(folder_name):
logger.warning(f"Folder '{folder_name}' already has stage2 folder. Skipping.")
return results
# List stage1 subfolders
subfolders = list_stage1_subfolders(folder_name)
if not subfolders:
logger.error(f"Could not find subfolders under stage1 in folder '{folder_name}'.")
return results
for subfolder in subfolders:
logger.info(f"[Stage2] Folder: {folder_name}, Processing subfolder: {subfolder}")
page_blobs = list_page_blobs(folder_name, subfolder)
stage2_pages = []
for blob in page_blobs:
logger.info(f" - Loading {blob.name}")
try:
page_json = json.loads(blob.download_as_text(encoding="utf-8"))
except Exception as e:
logger.error(f"Error loading blob {blob.name}: {e}")
continue
# Pass original blob name to maintain page number
page_result = process_page_stage2(page_json, blob.name, folder_name, subfolder)
if page_result and page_result.get("gcs_url"):
stage2_pages.append(page_result)
# Sort by page number
stage2_pages.sort(key=lambda p: p["page_number"])
# Create summary_stage2.json for each subfolder
summary = {
"folder": folder_name,
"subfolder": subfolder,
"processing_date": datetime.now().isoformat(),
"stage": "stage2",
"pages": stage2_pages
}
summary_path = f"{folder_name}/stage2/{subfolder}/summary_stage2.json"
summary_url = save_json_to_gcs(summary, summary_path)
results[subfolder] = {
"summary_url": summary_url,
"pages": stage2_pages
}
logger.info(f"Folder: {folder_name}, Subfolder {subfolder} processing complete: {summary_url} (total pages={len(stage2_pages)})")
return results
def process_all_folders() -> Dict[str, Dict[str, Any]]:
"""
Process all top-level folders in GCS bucket
Returns:
Dict: Processing results
"""
all_results = {}
# List all top-level folders
top_folders = list_top_level_folders()
if not top_folders:
logger.error(f"Could not find folders in bucket '{BUCKET_NAME}'.")
return all_results
for folder_name in top_folders:
logger.info(f"Starting processing folder '{folder_name}'")
# Process folder
folder_results = process_folder(folder_name)
if folder_results:
all_results[folder_name] = folder_results
logger.info(f"Folder '{folder_name}' processing complete")
else:
logger.info(f"No results for folder '{folder_name}' (no stage1 or stage2 already exists)")
return all_results
def main():
"""
Main function
"""
global BUCKET_NAME
parser = argparse.ArgumentParser(description="OCR System - Stage2 (ChatGPT Correction)")
parser.add_argument("--bucket", type=str, default=BUCKET_NAME,
help=f"GCS bucket name (default: {BUCKET_NAME})")
parser.add_argument("--folder", type=str, default=None,
help="Process specific folder only (processes all folders if not specified)")
# Use parse_known_args() to ignore unknown arguments
args, unknown = parser.parse_known_args()
# Modify global variable BUCKET_NAME
BUCKET_NAME = args.bucket
logger.info(f"Starting OCR Stage2 - Bucket: {BUCKET_NAME}")
if args.folder:
# Process specific folder only
logger.info(f"Starting processing folder '{args.folder}'")
results = process_folder(args.folder)
if results:
logger.info(f"Folder '{args.folder}' processing complete. The following subfolders were processed:")
for subfolder, info in results.items():
logger.info(f" {subfolder}: summary -> {info['summary_url']}")
else:
logger.info(f"No results for folder '{args.folder}' (no stage1 or stage2 already exists)")
else:
# Process all folders
logger.info("Starting processing all folders")
all_results = process_all_folders()
if all_results:
logger.info("All folders processing complete. The following folders were processed:")
for folder, results in all_results.items():
logger.info(f"Folder '{folder}':")
for subfolder, info in results.items():
logger.info(f" {subfolder}: summary -> {info['summary_url']}")
else:
logger.info("No folders were processed.")
if __name__ == "__main__":
main()
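# Typical invocation sketch (assumes GOOGLE_APPLICATION_CREDENTIALS and the
# OpenAI API key are already exported in the environment):
#   python ocr_stage2.py --bucket my-ocr-bucket              # all top-level folders
#   python ocr_stage2.py --bucket my-ocr-bucket --folder TOEFL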
# To customize output language, modify the system_prompt and user_prompt strings in the
# chatgpt_correct_text() function, and update the SPECIAL_CONTENT_PATTERNS and SIMPLIFIED_TAGS
# dictionaries to match your desired language.
================================================
FILE: v2.0_initial/Dockerfile
================================================
###############################################################################
# Dockerfile for GPU-based Python environment with DocLayout-YOLO (HEAD)
# - CUDA 11.8 + cuDNN 8 + Ubuntu 20.04
# - Python 3.9 (via deadsnakes)
# - Timezone: Asia/Seoul (can be changed)
# - NumPy <2.0 (1.26.4)
# - Patched DocLayout-YOLO (latest HEAD) to remove 'init_subclass' keyword argument
###############################################################################
FROM nvidia/cuda:11.8.0-cudnn8-devel-ubuntu20.04
# NVIDIA settings
ENV NVIDIA_VISIBLE_DEVICES=all
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility
ENV DEBIAN_FRONTEND=noninteractive
ENV TZ=Asia/Seoul
# 1) Install Packages
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get install -y \
software-properties-common \
wget \
git \
build-essential \
poppler-utils \
libgl1-mesa-glx \
libglib2.0-0 \
tzdata \
python3.9 \
python3.9-distutils \
python3.9-dev && \
ln -fs /usr/share/zoneinfo/Asia/Seoul /etc/localtime && \
echo "Asia/Seoul" > /etc/timezone && \
dpkg-reconfigure --frontend noninteractive tzdata && \
rm -rf /var/lib/apt/lists/*
# 2) Install pip
RUN wget https://bootstrap.pypa.io/get-pip.py -O /tmp/get-pip.py && \
python3.9 /tmp/get-pip.py && \
rm /tmp/get-pip.py
# 3) Create symbolic links for python3 and pip
RUN ln -sf /usr/bin/python3.9 /usr/local/bin/python && \
ln -sf /usr/local/bin/pip /usr/local/bin/pip3
# 4) Set working directory
WORKDIR /app
# 5) Upgrade pip, setuptools, and wheel
RUN pip install --no-cache-dir --upgrade pip setuptools wheel
# 6) Install PyTorch & TorchVision (e.g., 2.0.1 + cu118)
RUN pip install --no-cache-dir \
torch==2.0.1 \
torchvision==0.15.2 \
--index-url https://download.pytorch.org/whl/cu118
# 7) Install NumPy and other Python dependencies
RUN pip install --no-cache-dir \
numpy==1.26.4 \
Pillow==9.4.0 \
opencv-python==4.7.0.72 \
pdf2image==1.16.3 \
requests==2.31.0 \
huggingface_hub==0.19.4 \
google-cloud-storage==2.9.0 \
google-cloud-vision==3.4.0 \
PyYAML==6.0.1 \
ultralytics==8.0.196 \
protobuf==3.20.3
RUN pip install --no-cache-dir google-genai
# 8) Clone the latest HEAD version of DocLayout-YOLO
RUN git clone https://github.com/opendatalab/DocLayout-YOLO.git /app/doclayout-yolo
WORKDIR /app/doclayout-yolo
RUN git checkout main
RUN pip install --no-cache-dir -e .
# 9) Patch: Remove 'init_subclass' keyword argument from YOLOv10
RUN sed -i \
's/class YOLOv10(Model, PyTorchModelHubMixin, repo_url=.*$/class YOLOv10(Model, PyTorchModelHubMixin):/' \
/app/doclayout-yolo/doclayout_yolo/models/yolov10/model.py
# 10) Switch back to /app directory
WORKDIR /app
# 11) Copy custom_doclayout_yolo.py and advanced_ocr.py
COPY custom_doclayout_yolo.py /app/custom_doclayout_yolo.py
COPY advanced_ocr.py /app/advanced_ocr.py
# 12) Define mountable volumes
VOLUME ["/app/input", "/app/output", "/app/credentials"]
# 13) Set environment variables
ENV PYTHONUNBUFFERED=1
ENV GOOGLE_APPLICATION_CREDENTIALS=/app/credentials/YOUR_Google_Vision_S.Account.json
ENV PDF_FOLDER=/app/input
ENV OUTPUT_FOLDER=/app/output
ENV GCS_BUCKET_NAME=YOUR_GCS_BUCKET_NAME
ENV MATHPIX_APP_ID="YOUR_MATHPIX_APP_ID"
ENV MATHPIX_APP_KEY="YOUR_MATHPIX_APP_KEY"
ENV PYTHONPATH=/app:/app/doclayout-yolo
# 14) CMD: Run advanced_ocr.py with --input /app/input to process all PDFs in that directory
CMD ["python", "/app/advanced_ocr.py", "--input", "/app/input"]
================================================
FILE: v2.0_initial/advanced_ocr.py
================================================
import os
import cv2
import numpy as np
import json
import time
import hashlib
import base64
import requests
import io
import tempfile
import gc
from datetime import datetime
from google.cloud import storage
from google import genai
from google.genai import types
from PIL import Image
class AdvancedOCR:
def __init__(self, model_path=None, confidence_threshold=0.5, use_cache=True, cache_dir='cache'):
"""
Initialize advanced OCR processing class
Args:
model_path (str): DocLayout-YOLO model path
confidence_threshold (float): Detection confidence threshold
use_cache (bool): Whether to use caching
cache_dir (str): Cache directory path
"""
self.model_path = model_path
self.confidence_threshold = confidence_threshold
self.use_cache = use_cache
self.cache_dir = cache_dir
# Create cache directory
if self.use_cache and not os.path.exists(self.cache_dir):
os.makedirs(self.cache_dir)
# Load DocLayout-YOLO model
try:
from custom_doclayout_yolo import DocLayoutYOLO
self.doc_layout_model = DocLayoutYOLO(model_path=self.model_path)
print("DocLayout-YOLO model loaded successfully")
except Exception as e:
print(f"Failed to load DocLayout-YOLO model: {e}")
self.doc_layout_model = None
# Set up Gemini API
self._setup_gemini_api()
# Initialize Google Cloud Storage client
self._setup_gcs_client()
def _setup_gemini_api(self):
"""Set up Gemini API"""
# Get API key from environment variable
api_key = os.environ.get("GEMINI_API_KEY", "")
if api_key:
# Initialize latest Gemini API client
self.gemini_client = genai.Client(api_key=api_key)
print("Gemini API client initialized successfully")
else:
self.gemini_client = None
print("Warning: GEMINI_API_KEY environment variable not set")
def _setup_gcs_client(self):
"""Initialize Google Cloud Storage client"""
try:
# Get service account info from environment variable
SERVICE_ACCOUNT_JSON = os.environ.get("GOOGLE_APPLICATION_CREDENTIALS")
self.BUCKET_NAME = os.environ.get("GCS_BUCKET_NAME", "YOUR_GCS_BUCKET_NAME")
if SERVICE_ACCOUNT_JSON:
from google.oauth2.service_account import Credentials
creds = Credentials.from_service_account_file(SERVICE_ACCOUNT_JSON)
self.storage_client = storage.Client(credentials=creds, project=creds.project_id)
print("Google Cloud Storage client initialized successfully")
else:
self.storage_client = None
print("Warning: GOOGLE_APPLICATION_CREDENTIALS environment variable not set")
except Exception as e:
self.storage_client = None
print(f"Failed to initialize Google Cloud Storage client: {e}")
def _calculate_image_hash(self, image):
"""
Calculate image hash
Args:
image (numpy.ndarray): Image to calculate hash for
Returns:
str: Image hash string
"""
# Resize image to reduce memory usage
small_img = cv2.resize(image, (32, 32))
# Convert image to bytes with compression
_, buffer = cv2.imencode('.jpg', small_img, [cv2.IMWRITE_JPEG_QUALITY, 50])
# Calculate hash
image_hash = hashlib.md5(buffer).hexdigest()
# Release memory immediately
del small_img, buffer
return image_hash
def _get_cached_result(self, image_hash, cache_type):
"""
Get cached result
Args:
image_hash (str): Image hash
cache_type (str): Cache type (e.g., 'ocr', 'layout')
Returns:
dict or None: Cached result or None (cache miss)
"""
if not self.use_cache:
return None
cache_file = os.path.join(self.cache_dir, f"{cache_type}_{image_hash}.json")
if os.path.exists(cache_file):
try:
with open(cache_file, 'r', encoding='utf-8') as f:
return json.load(f)
except Exception as e:
print(f"Error loading cache file: {e}")
return None
def _save_to_cache(self, image_hash, cache_type, result):
"""
Save result to cache
Args:
image_hash (str): Image hash
cache_type (str): Cache type (e.g., 'ocr', 'layout')
result (dict): Result to save
"""
if not self.use_cache:
return
cache_file = os.path.join(self.cache_dir, f"{cache_type}_{image_hash}.json")
try:
with open(cache_file, 'w', encoding='utf-8') as f:
json.dump(result, f, ensure_ascii=False, indent=2)
except Exception as e:
print(f"Error saving to cache: {e}")
def _detect_with_doclayout_yolo(self, image_np):
"""
Detect document layout using DocLayout-YOLO
Args:
image_np (numpy.ndarray): Input image
Returns:
list: List of detected regions
"""
# Calculate image hash
image_hash = self._calculate_image_hash(image_np)
# Check cache
cached_result = self._get_cached_result(image_hash, 'layout')
if cached_result is not None:
return cached_result
# Return empty result if DocLayout-YOLO model is not initialized
if self.doc_layout_model is None:
print("DocLayout-YOLO model not initialized")
return []
# Detect with DocLayout-YOLO
try:
# Save image to temporary file with compression
with tempfile.NamedTemporaryFile(suffix=".jpg", delete=False) as temp_file:
temp_path = temp_file.name
cv2.imwrite(temp_path, image_np, [cv2.IMWRITE_JPEG_QUALITY, 85])
# Use predict method
results = self.doc_layout_model.predict(temp_path, conf=0.25)
# Filter and format results
regions = []
if results and len(results) > 0:
result = results[0]
if hasattr(result, 'boxes') and result.boxes is not None:
boxes = result.boxes.xyxy.cpu().numpy()
classes = result.boxes.cls.cpu().numpy()
confs = result.boxes.conf.cpu().numpy()
class_names = result.names
for i, (box, cls_id, conf) in enumerate(zip(boxes, classes, confs)):
x1, y1, x2, y2 = map(int, box)
cls_name = class_names[int(cls_id)]
if conf >= self.confidence_threshold:
regions.append({
'type': cls_name,
'coords': [int(x1), int(y1), int(x2-x1), int(y2-y1)],
'confidence': float(conf)
})
# Delete temporary file
os.unlink(temp_path)
# Merge overlapping regions
regions = self._merge_overlapping_regions(regions)
# Save to cache
self._save_to_cache(image_hash, 'layout', regions)
# Memory cleanup
del results
gc.collect()
return regions
except Exception as e:
print(f"DocLayout-YOLO detection error: {e}")
return []
def _merge_overlapping_regions(self, regions):
"""
Merge duplicate or overlapping regions
Args:
regions (list): List of regions to merge
Returns:
list: List of merged regions
"""
if len(regions) <= 1:
return regions
# Function to calculate IoU
def calculate_iou(box1, box2):
# Extract box coordinates
x1, y1, w1, h1 = box1['coords']
x2, y2, w2, h2 = box2['coords']
# Calculate box endpoints
x1_end, y1_end = x1 + w1, y1 + h1
x2_end, y2_end = x2 + w2, y2 + h2
# Calculate intersection area
x_inter = max(0, min(x1_end, x2_end) - max(x1, x2))
y_inter = max(0, min(y1_end, y2_end) - max(y1, y2))
area_inter = x_inter * y_inter
# Calculate union area
area1 = w1 * h1
area2 = w2 * h2
area_union = area1 + area2 - area_inter
# Calculate IoU
if area_union == 0:
return 0
return area_inter / area_union
# Mark regions to keep
to_keep = [True] * len(regions)
# Check for duplicate regions
for i in range(len(regions)):
if not to_keep[i]:
continue
for j in range(i+1, len(regions)):
if not to_keep[j]:
continue
# Consider as duplicate if same class and IoU above threshold
if regions[i]['type'] == regions[j]['type'] and calculate_iou(regions[i], regions[j]) > 0.5:
# Remove the one with lower confidence
if regions[i]['confidence'] < regions[j]['confidence']:
to_keep[i] = False
break
else:
to_keep[j] = False
# Return only non-duplicate regions
filtered_regions = []
for i in range(len(regions)):
if to_keep[i]:
filtered_regions.append(regions[i])
return filtered_regions
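# Worked example of the duplicate test above: boxes [0, 0, 10, 10] and
# [5, 5, 10, 10] (x, y, w, h) overlap in a 5x5 patch, so
# IoU = 25 / (100 + 100 - 25) ≈ 0.14; that is below the 0.5 threshold, so both
# regions are kept even when they share the same class.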
def _detect_regions(self, image_np):
"""
Detect special regions in image
Args:
image_np (numpy.ndarray): Input image
Returns:
list: List of detected regions
"""
# Detect regions with DocLayout-YOLO
regions = self._detect_with_doclayout_yolo(image_np)
# If no regions, treat entire image as text region
if not regions:
height, width = image_np.shape[:2]
regions = [{
'type': 'text',
'coords': [0, 0, width, height],
'confidence': 1.0
}]
# Sort regions by Y coordinate
regions.sort(key=lambda r: r['coords'][1])
return regions
def _crop_region(self, image, region):
"""
Extract region from image
Args:
image (numpy.ndarray): Original image
region (dict): Region information
Returns:
numpy.ndarray: Extracted region image
"""
x, y, w, h = region['coords']
# Adjust coordinates if they exceed image boundaries
x = max(0, x)
y = max(0, y)
w = min(w, image.shape[1] - x)
h = min(h, image.shape[0] - y)
# Use .copy() to create a new memory allocation
return image[y:y+h, x:x+w].copy()
def _optimize_image_for_api(self, image):
"""
Optimize image for API calls
Args:
image (numpy.ndarray): Original image
Returns:
bytes: Optimized image bytes
"""
# Check image size and resize if necessary
h, w = image.shape[:2]
max_dim = 1600 # Maximum dimension limit
if max(h, w) > max_dim:
scale = max_dim / max(h, w)
new_w = int(w * scale)
new_h = int(h * scale)
image = cv2.resize(image, (new_w, new_h))
# Compress image
_, buffer = cv2.imencode('.jpg', image, [cv2.IMWRITE_JPEG_QUALITY, 85])
return buffer.tobytes()
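# e.g. a 2000x3000 (h, w) crop gives scale = 1600/3000 ≈ 0.533 and is sent to
# the API as 1600x1066 before JPEG compression at quality 85.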
def _process_text_region(self, region_img, region_info):
"""
Process text region
Args:
region_img (numpy.ndarray): Region image
region_info (dict): Region information
Returns:
dict: Processed region information
"""
# Calculate image hash
image_hash = self._calculate_image_hash(region_img)
# Check cache
cached_result = self._get_cached_result(image_hash, 'text_ocr')
if cached_result is not None:
cached_result['coords'] = region_info['coords']
return cached_result
# Call Google Vision OCR API
try:
# Optimize image for API
image_bytes = self._optimize_image_for_api(region_img)
# API call (using service account credentials)
from google.cloud import vision
from google.oauth2.service_account import Credentials
SERVICE_ACCOUNT_JSON = os.environ.get("GOOGLE_APPLICATION_CREDENTIALS")
if SERVICE_ACCOUNT_JSON:
creds = Credentials.from_service_account_file(SERVICE_ACCOUNT_JSON)
vision_client = vision.ImageAnnotatorClient(credentials=creds)
image = vision.Image(content=image_bytes)
context = vision.ImageContext(language_hints=['ja', 'en', 'ko'])
response = vision_client.text_detection(image=image, image_context=context)
text = ''
if response.text_annotations:
text = response.text_annotations[0].description
processed_result = {
'type': 'text',
'coords': region_info['coords'],
'text': text
}
# Save to cache
self._save_to_cache(image_hash, 'text_ocr', processed_result)
# Memory cleanup
del image_bytes, response
gc.collect()
return processed_result
else:
# API key method (alternative)
# Encode image as base64
encoded_image = base64.b64encode(image_bytes).decode('utf-8')
# Prepare API request data
request_data = {
'requests': [
{
'image': {
'content': encoded_image
},
'features': [
{
'type': 'TEXT_DETECTION'
}
],
'imageContext': {
'languageHints': ['ja', 'en', 'ko']
}
}
]
}
response = requests.post(
'https://vision.googleapis.com/v1/images:annotate',
params={'key': os.environ.get('GOOGLE_VISION_API_KEY', '')},
json=request_data
)
# Process response
if response.status_code == 200:
result = response.json()
text = ''
# Extract text
if 'responses' in result and result['responses'] and 'fullTextAnnotation' in result['responses'][0]:
text = result['responses'][0]['fullTextAnnotation']['text']
processed_result = {
'type': 'text',
'coords': region_info['coords'],
'text': text
}
# Save to cache
self._save_to_cache(image_hash, 'text_ocr', processed_result)
# Memory cleanup
del image_bytes, encoded_image, result, response
gc.collect()
return processed_result
else:
print(f"Google Vision API error: {response.status_code} {response.text}")
except Exception as e:
print(f"Text region processing error: {e}")
# Return empty result on error
return {
'type': 'text',
'coords': region_info['coords'],
'text': ''
}
def _process_table_region(self, region_img, region_info):
"""
Process table region (using Gemini API)
Args:
region_img (numpy.ndarray): Region image
region_info (dict): Region information
Returns:
dict: Processed region information
"""
# Calculate image hash
image_hash = self._calculate_image_hash(region_img)
# Check cache
cached_result = self._get_cached_result(image_hash, 'table_ocr')
if cached_result is not None:
cached_result['coords'] = region_info['coords']
return cached_result
# Process table with Gemini API
try:
# Process as text region if Gemini client is not initialized
if self.gemini_client is None:
print("Gemini API client not initialized. Processing as text region.")
return self._process_text_region(region_img, region_info)
# Optimize image (resize and compress)
h, w = region_img.shape[:2]
max_dim = 1024 # Recommended max size for Gemini API
if max(h, w) > max_dim:
scale = max_dim / max(h, w)
new_w = int(w * scale)
new_h = int(h * scale)
region_img_resized = cv2.resize(region_img, (new_w, new_h))
else:
region_img_resized = region_img
# Convert image to PIL format
pil_image = Image.fromarray(cv2.cvtColor(region_img_resized, cv2.COLOR_BGR2RGB))
# Create prompt
prompt = """
Analyze this table and respond in the following format:
1. Accurately reproduce the table structure in markdown format. Clearly distinguish each column and row, and use line breaks appropriately to make the table structure visually clear.
2. Provide a brief summary of the table content.
3. Explain the educational significance and importance of this table.
4. List related learning topics.
Provide your response in the following JSON format:
{
"markdown_table": "| Column1 | Column2 | Column3 |\n|-----|-----|-----|\n| Row1Col1 | Row1Col2 | Row1Col3 |\n| Row2Col1 | Row2Col2 | Row2Col3 |",
"summary": "Table content summary",
"educational_value": "Educational significance and importance",
"related_topics": ["Related topic 1", "Related topic 2", ...]
}
Return only the JSON format without any other text. In particular, include line breaks (\\n) in the markdown_table field using actual markdown table format.
"""
# API call (latest method)
print("Calling Gemini API - processing table region")
# Convert image to bytes (memory-efficient method)
img_byte_arr = io.BytesIO()
pil_image.save(img_byte_arr, format='JPEG', quality=85, optimize=True)
img_bytes = img_byte_arr.getvalue()
contents = [
types.Content(
role="user",
parts=[
types.Part.from_text(text=prompt),
types.Part.from_bytes(data=img_bytes, mime_type="image/jpeg")
],
),
]
generate_content_config = types.GenerateContentConfig(
response_mime_type="text/plain",
)
response = self.gemini_client.models.generate_content(
model="gemini-2.0-flash",
contents=contents,
config=generate_content_config,
)
# Memory cleanup
del pil_image, img_byte_arr, img_bytes, region_img_resized
gc.collect()
# Log response
print(f"Gemini API response type: {type(response)}")
# Process response (improved method)
gemini_result = {}
try:
# Get response text
response_text = response.text
print(f"Gemini API response text: {response_text[:100]}...")
# Try to parse JSON
try:
# Extract JSON part using regex
import re
json_match = re.search(r'(\{.*\})', response_text, re.DOTALL)
if json_match:
json_str = json_match.group(1)
gemini_result = json.loads(json_str)
else:
# Construct directly if not in JSON format
gemini_result = {
"markdown_table": "",
"summary": response_text,
"educational_value": "",
"related_topics": []
}
except Exception as json_error:
print(f"JSON parsing error: {json_error}")
gemini_result = {
"markdown_table": "",
"summary": response_text,
"educational_value": "",
"related_topics": []
}
except Exception as resp_error:
print(f"Response processing error: {resp_error}")
gemini_result = {
"markdown_table": "",
"summary": "Error occurred during response processing",
"educational_value": "",
"related_topics": []
}
# Construct result
markdown_table = gemini_result.get("markdown_table", "")
summary = gemini_result.get("summary", "")
educational_value = gemini_result.get("educational_value", "")
related_topics = gemini_result.get("related_topics", [])
# Construct final text (memory-efficient method)
final_text_parts = [
"[Table content start. ChatGPT should not delete this content. This is important conversion content.]",
"",
"## Table Structure:",
markdown_table,
"",
"## Summary:",
summary,
"",
"## Educational Significance:",
educational_value,
"",
"## Related Topics:",
', '.join(related_topics),
"",
"[Table content end]"
]
final_text = "\n".join(final_text_parts)
processed_result = {
'type': 'table',
'coords': region_info['coords'],
'markdown_table': markdown_table,
'summary': summary,
'educational_value': educational_value,
'related_topics': related_topics,
'text': final_text
}
# Save to cache
self._save_to_cache(image_hash, 'table_ocr', processed_result)
# Memory cleanup
del response, response_text, gemini_result, final_text_parts
gc.collect()
return processed_result
except Exception as e:
print(f"Table region processing error: {e}")
# Fall back to Google Vision OCR on error
return self._process_text_region(region_img, region_info)
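# Note on the greedy r'(\{.*\})' extraction used above: because it spans from
# the first "{" to the last "}", a fenced reply still parses, e.g.
#   re.search(r'(\{.*\})', '```json\n{"summary": "ok"}\n```', re.DOTALL).group(1)
#   -> '{"summary": "ok"}'
# If stray braces around the body make json.loads() fail, the handler falls
# back to treating the whole reply as the summary.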
def _process_figure_region(self, region_img, region_info):
"""
Process figure region (using Gemini API)
Args:
region_img (numpy.ndarray): Region image
region_info (dict): Region information
Returns:
dict: Processed region information
"""
# Calculate image hash
image_hash = self._calculate_image_hash(region_img)
# Check cache
cached_result = self._get_cached_result(image_hash, 'figure_ocr')
if cached_result is not None:
cached_result['coords'] = region_info['coords']
return cached_result
# Process figure with Gemini API
try:
# Process as text region if Gemini client is not initialized
if self.gemini_client is None:
print("Gemini API client not initialized. Processing as text region.")
return self._process_text_region(region_img, region_info)
# Optimize image (resize and compress)
h, w = region_img.shape[:2]
max_dim = 1024 # Recommended max size for Gemini API
if max(h, w) > max_dim:
scale = max_dim / max(h, w)
new_w = int(w * scale)
new_h = int(h * scale)
region_img_resized = cv2.resize(region_img, (new_w, new_h))
else:
region_img_resized = region_img
# Convert image to PIL format
pil_image = Image.fromarray(cv2.cvtColor(region_img_resized, cv2.COLOR_BGR2RGB))
# Create prompt
prompt = """
Analyze this image and respond in the following format:
1. Describe in detail what is included in the image. Divide into paragraphs for better readability.
2. Explain the educational significance and importance of this image.
3. List related learning topics.
4. Explain how this image could be used in exam questions.
Provide your response in the following JSON format:
{
"description": "Image description (write in multiple paragraphs for better readability)",
"educational_value": "Educational significance and importance",
"related_topics": ["Related topic 1", "Related topic 2", ...],
"exam_relevance": "Exam relevance"
}
Return only the JSON format without any other text. Write the description in multiple paragraphs for better readability.
"""
# API call (latest method)
print("Calling Gemini API - processing figure region")
# Convert image to bytes (memory-efficient method)
img_byte_arr = io.BytesIO()
pil_image.save(img_byte_arr, format='JPEG', quality=85, optimize=True)
img_bytes = img_byte_arr.getvalue()
contents = [
types.Content(
role="user",
parts=[
types.Part.from_text(text=prompt),
types.Part.from_bytes(data=img_bytes, mime_type="image/jpeg")
],
),
]
generate_content_config = types.GenerateContentConfig(
response_mime_type="text/plain",
)
response = self.gemini_client.models.generate_content(
model="gemini-2.0-flash",
contents=contents,
config=generate_content_config,
)
# Memory cleanup
del pil_image, img_byte_arr, img_bytes, region_img_resized
gc.collect()
# Log response
print(f"Gemini API response type: {type(response)}")
# Process response (improved method)
gemini_result = {}
try:
# Get response text
response_text = response.text
print(f"Gemini API response text: {response_text[:100]}...")
# Try to parse JSON
try:
# Extract JSON part using regex
import re
json_match = re.search(r'(\{.*\})', response_text, re.DOTALL)
if json_match:
json_str = json_match.group(1)
gemini_result = json.loads(json_str)
else:
# Construct directly if not in JSON format
gemini_result = {
"description": response_text,
"educational_value": "",
"related_topics": [],
"exam_relevance": ""
}
except Exception as json_error:
print(f"JSON parsing error: {json_error}")
gemini_result = {
"description": response_text,
"educational_value": "",
"related_topics": [],
"exam_relevance": ""
}
except Exception as resp_error:
print(f"Response processing error: {resp_error}")
gemini_result = {
"description": "Error occurred during response processing",
"educational_value": "",
"related_topics": [],
"exam_relevance": ""
}
# Construct result
description = gemini_result.get("description", "")
educational_value = gemini_result.get("educational_value", "")
related_topics = gemini_result.get("related_topics", [])
exam_relevance = gemini_result.get("exam_relevance", "")
# Construct final text (memory-efficient method)
final_text_parts = [
"[Figure content start. ChatGPT should not delete this content. This is important conversion content.]",
"",
"## Image Description:",
description,
"",
"## Educational Significance:",
educational_value,
"",
"## Related Topics:",
', '.join(related_topics),
"",
"## Exam Relevance:",
exam_relevance,
"",
"[Figure content end]"
]
final_text = "\n".join(final_text_parts)
processed_result = {
'type': 'figure',
'coords': region_info['coords'],
'description': description,
'educational_value': educational_value,
'related_topics': related_topics,
'exam_relevance': exam_relevance,
'text': final_text
}
# Save to cache
self._save_to_cache(image_hash, 'figure_ocr', processed_result)
# Memory cleanup
del response, response_text, gemini_result, final_text_parts
gc.collect()
return processed_result
except Exception as e:
print(f"Figure region processing error: {e}")
# Fall back to Google Vision OCR on error
return self._process_text_region(region_img, region_info)
def _process_formula_region(self, region_img, region_info):
"""
Process formula region
Args:
region_img (numpy.ndarray): Region image
region_info (dict): Region information
Returns:
dict: Processed region information
"""
# Calculate image hash
image_hash = self._calculate_image_hash(region_img)
# Check cache
cached_result = self._get_cached_result(image_hash, 'formula_ocr')
if cached_result is not None:
cached_result['coords'] = region_info['coords']
return cached_result
# Call MathPix API
try:
# Optimize image (resize and compress)
h, w = region_img.shape[:2]
max_dim = 1024 # Appropriate size limit
if max(h, w) > max_dim:
scale = max_dim / max(h, w)
new_w = int(w * scale)
new_h = int(h * scale)
region_img_resized = cv2.resize(region_img, (new_w, new_h))
else:
region_img_resized = region_img
# Encode image as base64
_, buffer = cv2.imencode('.jpg', region_img_resized, [cv2.IMWRITE_JPEG_QUALITY, 85])
encoded_image = base64.b64encode(buffer).decode('utf-8')
# Memory cleanup
del region_img_resized, buffer
gc.collect()
# Prepare API request data
request_data = {
'src': f'data:image/jpeg;base64,{encoded_image}',
'formats': ['text', 'latex'],
'data_options': {
'include_asciimath': True,
'include_latex': True
}
}
# API call
response = requests.post(
'https://api.mathpix.com/v3/text',
headers={
'app_id': os.environ.get('MATHPIX_APP_ID', ''),
'app_key': os.environ.get('MATHPIX_APP_KEY', ''),
'Content-Type': 'application/json'
},
json=request_data
)
# Memory cleanup
del encoded_image, request_data
gc.collect()
# Process response
if response.status_code == 200:
result = response.json()
# Extract formula
latex = result.get('latex', '')
text = result.get('text', '')
# Construct final text
final_text = f"[Formula content start. ChatGPT should not delete this content. This is important conversion content.]\n\nLaTeX: {latex}\n\nText: {text}\n\n[Formula content end]"
processed_result = {
'type': 'formula',
'coords': region_info['coords'],
'latex': latex,
'text': final_text
}
# Save to cache
self._save_to_cache(image_hash, 'formula_ocr', processed_result)
# Memory cleanup
del response, result
gc.collect()
return processed_result
else:
print(f"MathPix API error: {response.status_code} {response.text}")
except Exception as e:
print(f"Formula region processing error: {e}")
# Fall back to Google Vision OCR on error
return self._process_text_region(region_img, region_info)
def _process_title_region(self, region_img, region_info):
"""
Process title region
Args:
region_img (numpy.ndarray): Region image
region_info (dict): Region information
Returns:
dict: Processed region information
"""
# Process same as text region
result = self._process_text_region(region_img, region_info)
result['type'] = 'title'
return result
def _process_list_region(self, region_img, region_info):
"""
Process list region
Args:
region_img (numpy.ndarray): Region image
region_info (dict): Region information
Returns:
dict: Processed region information
"""
# Process same as text region
result = self._process_text_region(region_img, region_info)
result['type'] = 'list'
return result
def _process_regions(self, image_np, regions):
"""
Process detected regions
Args:
image_np (numpy.ndarray): Original image
regions (list): List of detected regions
Returns:
list: List of processed regions
"""
processed_regions = []
# Set to store coordinates of already processed regions
processed_coords = set()
for region in regions:
# Convert region coordinates to string for duplicate checking
region_key = f"{region['coords'][0]}_{region['coords'][1]}_{region['coords'][2]}_{region['coords'][3]}"
# Skip if already processed
if region_key in processed_coords:
continue
# Extract region image
region_img = self._crop_region(image_np, region)
# Process based on region type
if region['type'] == 'text':
processed_region = self._process_text_region(region_img, region)
elif region['type'] == 'title':
processed_region = self._process_title_region(region_img, region)
elif region['type'] == 'list':
processed_region = self._process_list_region(region_img, region)
elif region['type'] == 'table':
processed_region = self._process_table_region(region_img, region)
elif region['type'] == 'figure':
processed_region = self._process_figure_region(region_img, region)
elif region['type'] == 'formula':
processed_region = self._process_formula_region(region_img, region)
else:
# Process unknown types as text
processed_region = self._process_text_region(region_img, region)
# Add processed region
processed_regions.append(processed_region)
# Store processed region coordinates
processed_coords.add(region_key)
# Memory cleanup
del region_img
gc.collect()
# Sort by Y coordinate
processed_regions.sort(key=lambda r: r['coords'][1])
return processed_regions
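# Illustrative alternative (not used above): the if/elif dispatch could be a
# lookup table, keeping behavior identical while making new region types a
# one-line change:
#
#   handlers = {
#       'text': self._process_text_region,
#       'title': self._process_title_region,
#       'list': self._process_list_region,
#       'table': self._process_table_region,
#       'figure': self._process_figure_region,
#       'formula': self._process_formula_region,
#   }
#   processed_region = handlers.get(region['type'], self._process_text_region)(region_img, region)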
def _combine_processed_regions(self, processed_regions):
"""
Combine processed regions to generate final text
Args:
processed_regions (list): List of processed regions
Returns:
str: Combined text
"""
# Memory-efficient string building
text_parts = []
for region in processed_regions:
if 'text' in region and region['text']:
text_parts.append(region['text'])
text_parts.append("\n\n")
return ''.join(text_parts).strip()
def _upload_to_gcs(self, data, gcs_path):
"""
Upload results to GCS
Args:
data (dict): Data to upload
gcs_path (str): GCS path
Returns:
bool: Upload success status
"""
if not self.storage_client:
print(f"GCS client not initialized, skipping upload: {gcs_path}")
return False
try:
bucket = self.storage_client.bucket(self.BUCKET_NAME)
blob = bucket.blob(gcs_path)
# Serialize JSON data (memory-efficient method)
json_data = json.dumps(data, ensure_ascii=False, indent=2)
# Upload
blob.upload_from_string(json_data, content_type="application/json")
print(f"GCS upload complete: gs://{self.BUCKET_NAME}/{gcs_path}")
# Memory cleanup
del json_data
gc.collect()
return True
except Exception as e:
print(f"GCS upload error: {e}")
return False
def process_image(self, image_path):
"""
Main image processing function
Args:
image_path (str): Path to image to process
Returns:
dict: Processing results
"""
start_time = time.time()
# Load image
image_np = cv2.imread(image_path)
if image_np is None:
return {'error': f"Cannot load image: {image_path}"}
# Get image dimensions
height, width = image_np.shape[:2]
# Detect regions
regions = self._detect_regions(image_np)
# Process regions
processed_regions = self._process_regions(image_np, regions)
# Combine text
text = self._combine_processed_regions(processed_regions)
# Calculate processing time
processed_time = time.time() - start_time
# Return results
result = {
'width': width,
'height': height,
'regions': regions,
'processed_regions': processed_regions,
'text': text,
'region_positions': [region['coords'] for region in processed_regions],
'processed_time': datetime.now().isoformat()
}
# Memory cleanup
del image_np
gc.collect()
return result
def process_pdf(self, pdf_path, output_folder=None):
"""
Process PDF file
Args:
pdf_path (str): PDF file path
output_folder (str): Output folder path
Returns:
dict: Processing results summary
"""
try:
from pdf2image import convert_from_path, pdfinfo_from_path
# Extract PDF filename
pdf_file = os.path.basename(pdf_path)
# Extract subject name (from filename or use default)
subject = pdf_file.replace(".pdf", "").split("_")[-1] if "_" in pdf_file else "Unknown"
print(f"Starting PDF processing: {pdf_file}, Subject: {subject}")
# Read PDF info
pdf_info = pdfinfo_from_path(pdf_path)
num_pages = pdf_info["Pages"]
print(f"PDF page count: {num_pages}")
# Set output folder
if output_folder is None:
output_folder = os.path.join(os.path.dirname(pdf_path), "output")
# Create output folder
os.makedirs(output_folder, exist_ok=True)
# Create subject folder
subject_folder = os.path.join(output_folder, subject)
os.makedirs(subject_folder, exist_ok=True)
# Create PDF name folder
pdf_name = pdf_file.replace(".pdf", "")
pdf_folder = os.path.join(subject_folder, pdf_name)
os.makedirs(pdf_folder, exist_ok=True)
# Store page results
results = []
# Process pages one by one (memory-efficient method)
for i in range(num_pages):
print(f"Processing page {i+1}/{num_pages}...")
# Convert only one page at a time (memory efficiency)
images = convert_from_path(pdf_path, dpi=300, first_page=i+1, last_page=i+1)
if not images:
print(f"Failed to convert page {i+1}, skipping.")
continue
image = images[0]
# Save image
image_path = os.path.join(pdf_folder, f"page_{i+1}.jpg")
image.save(image_path, "JPEG", quality=85, optimize=True)
# Memory cleanup
del images, image
gc.collect()
# Process image
page_result = self.process_image(image_path)
results.append(page_result)
# Save results
output_path = os.path.join(pdf_folder, f"page_{i+1}.json")
self.save_result(page_result, output_path)
# Upload page results to GCS
gcs_path = f"{subject}/stage1/{pdf_name}/page_{i+1}.json"
self._upload_to_gcs(page_result, gcs_path)
# Memory cleanup
del page_result
gc.collect()
# Create summary results
summary = {
"pdf_name": pdf_name,
"num_pages": num_pages,
"processed_time": datetime.now().isoformat(),
"pages": [{"page": i+1, "status": "processed"} for i in range(num_pages)]
}
# Save summary results
summary_path = os.path.join(pdf_folder, "summary_stage1.json")
self.save_result(summary, summary_path)
# Upload summary results to GCS
gcs_summary_path = f"{subject}/stage1/{pdf_name}/summary_stage1.json"
self._upload_to_gcs(summary, gcs_summary_path)
print(f"PDF processing complete: {pdf_file}")
# Memory cleanup
del results
gc.collect()
return summary
except Exception as e:
print(f"PDF processing error: {e}")
return {"error": str(e)}
def save_result(self, result, output_path):
"""
Save processing results to JSON file
Args:
result (dict): Processing results
output_path (str): Path to save file
"""
with open(output_path, 'w', encoding='utf-8') as f:
json.dump(result, f, ensure_ascii=False, indent=2)
# (AdvancedOCR class and other code parts use the definitions above)
if __name__ == "__main__":
import argparse
import os
parser = argparse.ArgumentParser(description='Advanced OCR Processing')
# Required argument: --input (accepts both single file or directory)
parser.add_argument('--input', default='/app/input', help='Input file or directory path (image or PDF)')
# Optional argument: --output
parser.add_argument('--output', help='Output JSON file path (for image) or output folder (for PDF)')
parser.add_argument('--model', default=None, help='DocLayout-YOLO model path')
parser.add_argument('--confidence', type=float, default=0.5, help='Detection confidence threshold')
parser.add_argument('--no-cache', action='store_true', help='Disable caching')
parser.add_argument('--cache-dir', default='cache', help='Cache directory path')
args = parser.parse_args()
# Create OCR processing object
ocr = AdvancedOCR(
model_path=args.model,
confidence_threshold=args.confidence,
use_cache=not args.no_cache,
cache_dir=args.cache_dir
)
input_path = args.input
# If input is a directory, process all PDF files in that directory
if os.path.isdir(input_path):
import glob
pdf_files = glob.glob(os.path.join(input_path, "*.pdf"))
if not pdf_files:
print(f"No PDF files found in input directory {input_path}.")
for pdf_file in pdf_files:
# Output file path: create output folder in the directory containing the PDF file
output_folder = args.output if args.output else os.path.join(os.path.dirname(pdf_file), "output")
result = ocr.process_pdf(pdf_file, output_folder)
ocr.save_result(result, os.path.join(output_folder, os.path.splitext(os.path.basename(pdf_file))[0] + ".json"))
print(f"PDF processing complete: {pdf_file} -> {output_folder}")
else:
# For single file: process as PDF or image based on extension
if input_path.lower().endswith('.pdf'):
output_folder = args.output if args.output else os.path.join(os.path.dirname(input_path), "output")
result = ocr.process_pdf(input_path, output_folder)
print(f"PDF processing complete: {input_path} -> {output_folder}")
else:
result = ocr.process_image(input_path)
output_path = args.output if args.output else input_path.rsplit('.', 1)[0] + '.json'
ocr.save_result(result, output_path)
print(f"Image processing complete: {input_path} -> {output_path}")
# To customize output language, modify the prompt strings in the following methods:
# 1. _process_table_region(): Change the prompt text to your desired language
# 2. _process_figure_region(): Change the prompt text to your desired language
# 3. _process_formula_region(): Change the final_text format to your desired language
# 4. Change the language hints in _process_text_region() from ['ja', 'en', 'ko'] to your preferred languages
# IMPORTANT: Do not make any omissions or changes to the content, abbreviations, or code modifications. This code is already working correctly.
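# Example invocations (sketch; paths are placeholders):
#   python advanced_ocr.py --input /app/input                  # every PDF in the directory
#   python advanced_ocr.py --input exam.pdf --output ./out     # single PDF
#   python advanced_ocr.py --input page_3.jpg --no-cache       # single image, cache disabled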
================================================
FILE: v2.0_initial/custom_doclayout_yolo.py
================================================
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
custom_doclayout_yolo.py
- Performs document layout analysis using the DocLayout-YOLO model.
- Updated for compatibility with PyTorch 2.0.1 or higher.
- Loads the model using the officially recommended method (hf_hub_download or from_pretrained).
"""
import os
import torch
import logging
from huggingface_hub import hf_hub_download
from doclayout_yolo import YOLOv10
logger = logging.getLogger(__name__)
class DocLayoutYOLO:
"""DocLayout-YOLO model wrapper class"""
def __init__(self, model_path=None):
"""
Initialize the DocLayout-YOLO model
Args:
model_path (str, optional): Local model file path.
If not provided, the pre-trained model will be loaded from Hugging Face Hub.
"""
self.model_path = model_path
self.model = None
self.device = "cuda:0" if torch.cuda.is_available() else "cpu"
self.init_model()
def init_model(self):
"""Initialize the model"""
try:
if self.model_path and os.path.exists(self.model_path):
# Use the local model file if available
logger.info(f"Loading local model file: {self.model_path}")
self.model = YOLOv10(self.model_path)
else:
# If a local file is not available, download and load the pre-trained model from Hugging Face Hub
logger.info("Loading pre-trained model from Hugging Face (using hf_hub_download)")
filepath = hf_hub_download(
repo_id="juliozhao/DocLayout-
SYMBOL INDEX (97 symbols across 8 files)
FILE: v1.0_initial/advanced_ocr.py
class AdvancedOCR (line 17) | class AdvancedOCR:
method __init__ (line 18) | def __init__(self, model_path=None, confidence_threshold=0.5, use_cach...
method _setup_gemini_api (line 52) | def _setup_gemini_api(self):
method _setup_gcs_client (line 64) | def _setup_gcs_client(self):
method _calculate_image_hash (line 83) | def _calculate_image_hash(self, image):
method _get_cached_result (line 99) | def _get_cached_result(self, image_hash, cache_type):
method _save_to_cache (line 123) | def _save_to_cache(self, image_hash, cache_type, result):
method _detect_with_doclayout_yolo (line 142) | def _detect_with_doclayout_yolo(self, image_np):
method _merge_overlapping_regions (line 210) | def _merge_overlapping_regions(self, regions):
method _detect_regions (line 277) | def _detect_regions(self, image_np):
method _crop_region (line 304) | def _crop_region(self, image, region):
method _process_text_region (line 323) | def _process_text_region(self, region_img, region_info):
method _process_table_region (line 434) | def _process_table_region(self, region_img, region_info):
method _process_figure_region (line 599) | def _process_figure_region(self, region_img, region_info):
method _process_formula_region (line 763) | def _process_formula_region(self, region_img, region_info):
method _process_title_region (line 840) | def _process_title_region(self, region_img, region_info):
method _process_list_region (line 856) | def _process_list_region(self, region_img, region_info):
method _process_regions (line 872) | def _process_regions(self, image_np, regions):
method _combine_processed_regions (line 927) | def _combine_processed_regions(self, processed_regions):
method _upload_to_gcs (line 945) | def _upload_to_gcs(self, data, gcs_path):
method process_image (line 976) | def process_image(self, image_path):
method process_pdf (line 1019) | def process_pdf(self, pdf_path, output_folder=None):
method save_result (line 1110) | def save_result(self, result, output_path):
FILE: v1.0_initial/custom_doclayout_yolo.py
class DocLayoutYOLO (line 18) | class DocLayoutYOLO:
method __init__ (line 21) | def __init__(self, model_path=None):
method init_model (line 34) | def init_model(self):
method predict (line 68) | def predict(self, image_path, imgsz=1024, conf=0.25, device=None):
FILE: v1.0_initial/ocr_stage1.py
function run_docker_container (line 36) | def run_docker_container(input_dir, output_dir, credentials_dir, image_n...
function main (line 127) | def main():
FILE: v1.0_initial/ocr_stage2.py
function parse_gcs_prefix (line 88) | def parse_gcs_prefix(gcs_url: str) -> Tuple[str, str]:
function load_json_from_gcs (line 105) | def load_json_from_gcs(gcs_url: str) -> Optional[Dict]:
function save_json_to_gcs (line 137) | def save_json_to_gcs(data: Dict, gcs_path: str) -> Optional[str]:
function check_folder_exists (line 164) | def check_folder_exists(folder_path: str) -> bool:
function simplify_special_content_tags (line 184) | def simplify_special_content_tags(text: str) -> str:
function extract_special_content (line 209) | def extract_special_content(text: str) -> Tuple[str, Dict[str, List[Dict...
function restore_special_content (line 248) | def restore_special_content(text: str, special_contents: Dict[str, List[...
function chatgpt_correct_text (line 291) | def chatgpt_correct_text(original_text: str) -> Dict[str, Any]:
function chatgpt_correct_special_content (line 390) | def chatgpt_correct_special_content(content_type: str, content: str) -> ...
function extract_page_number_from_filename (line 406) | def extract_page_number_from_filename(filename: str) -> Optional[int]:
function process_page_stage2 (line 422) | def process_page_stage2(page_data: Dict, original_blob_name: str, folder...
function list_top_level_folders (line 481) | def list_top_level_folders() -> List[str]:
function check_stage1_exists (line 507) | def check_stage1_exists(folder_name: str) -> bool:
function check_stage2_exists (line 520) | def check_stage2_exists(folder_name: str) -> bool:
function list_stage1_subfolders (line 533) | def list_stage1_subfolders(folder_name: str) -> List[str]:
function list_page_blobs (line 564) | def list_page_blobs(folder_name: str, subfolder: str) -> List[Any]:
function process_folder (line 595) | def process_folder(folder_name: str) -> Dict[str, Any]:
function process_all_folders (line 664) | def process_all_folders() -> Dict[str, Dict[str, Any]]:
function main (line 694) | def main():
FILE: v2.0_initial/advanced_ocr.py
class AdvancedOCR (line 18) | class AdvancedOCR:
method __init__ (line 19) | def __init__(self, model_path=None, confidence_threshold=0.5, use_cach...
method _setup_gemini_api (line 53) | def _setup_gemini_api(self):
method _setup_gcs_client (line 65) | def _setup_gcs_client(self):
method _calculate_image_hash (line 84) | def _calculate_image_hash(self, image):
method _get_cached_result (line 104) | def _get_cached_result(self, image_hash, cache_type):
method _save_to_cache (line 128) | def _save_to_cache(self, image_hash, cache_type, result):
method _detect_with_doclayout_yolo (line 147) | def _detect_with_doclayout_yolo(self, image_np):
method _merge_overlapping_regions (line 219) | def _merge_overlapping_regions(self, regions):
method _detect_regions (line 286) | def _detect_regions(self, image_np):
method _crop_region (line 313) | def _crop_region(self, image, region):
method _optimize_image_for_api (line 333) | def _optimize_image_for_api(self, image):
method _process_text_region (line 357) | def _process_text_region(self, region_img, region_info):
method _process_table_region (line 478) | def _process_table_region(self, region_img, region_info):
method _process_figure_region (line 665) | def _process_figure_region(self, region_img, region_info):
method _process_formula_region (line 852) | def _process_formula_region(self, region_img, region_info):
method _process_title_region (line 953) | def _process_title_region(self, region_img, region_info):
method _process_list_region (line 969) | def _process_list_region(self, region_img, region_info):
method _process_regions (line 985) | def _process_regions(self, image_np, regions):
method _combine_processed_regions (line 1044) | def _combine_processed_regions(self, processed_regions):
method _upload_to_gcs (line 1064) | def _upload_to_gcs(self, data, gcs_path):
method process_image (line 1100) | def process_image(self, image_path):
method process_pdf (line 1149) | def process_pdf(self, pdf_path, output_folder=None):
method save_result (line 1260) | def save_result(self, result, output_path):
FILE: v2.0_initial/custom_doclayout_yolo.py
class DocLayoutYOLO (line 18) | class DocLayoutYOLO:
method __init__ (line 21) | def __init__(self, model_path=None):
method init_model (line 34) | def init_model(self):
method predict (line 68) | def predict(self, image_path, imgsz=1024, conf=0.25, device=None):
FILE: v2.0_initial/ocr_stage1.py
function run_docker_container (line 36) | def run_docker_container(input_dir, output_dir, credentials_dir, image_n...
function main (line 126) | def main():
FILE: v2.0_initial/ocr_stage2.py
function parse_gcs_prefix (line 88) | def parse_gcs_prefix(gcs_url: str) -> Tuple[str, str]:
function load_json_from_gcs (line 105) | def load_json_from_gcs(gcs_url: str) -> Optional[Dict]:
function save_json_to_gcs (line 137) | def save_json_to_gcs(data: Dict, gcs_path: str) -> Optional[str]:
function check_folder_exists (line 164) | def check_folder_exists(folder_path: str) -> bool:
function simplify_special_content_tags (line 184) | def simplify_special_content_tags(text: str) -> str:
function extract_special_content (line 209) | def extract_special_content(text: str) -> Tuple[str, Dict[str, List[Dict...
function restore_special_content (line 248) | def restore_special_content(text: str, special_contents: Dict[str, List[...
function chatgpt_correct_text (line 291) | def chatgpt_correct_text(original_text: str) -> Dict[str, Any]:
function chatgpt_correct_special_content (line 390) | def chatgpt_correct_special_content(content_type: str, content: str) -> ...
function extract_page_number_from_filename (line 406) | def extract_page_number_from_filename(filename: str) -> Optional[int]:
function process_page_stage2 (line 422) | def process_page_stage2(page_data: Dict, original_blob_name: str, folder...
function list_top_level_folders (line 481) | def list_top_level_folders() -> List[str]:
function check_stage1_exists (line 507) | def check_stage1_exists(folder_name: str) -> bool:
function check_stage2_exists (line 520) | def check_stage2_exists(folder_name: str) -> bool:
function list_stage1_subfolders (line 533) | def list_stage1_subfolders(folder_name: str) -> List[str]:
function list_page_blobs (line 564) | def list_page_blobs(folder_name: str, subfolder: str) -> List[Any]:
function process_folder (line 595) | def process_folder(folder_name: str) -> Dict[str, Any]:
function process_all_folders (line 664) | def process_all_folders() -> Dict[str, Dict[str, Any]]:
function main (line 694) | def main():
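The stage-2 function index implies a protect-correct-restore loop around the LLM call: extract special-content tags (tables, figures, formulas) into placeholders, send the remaining plain text to ChatGPT for correction, then restore the protected spans. Below is a hypothetical sketch of that loop for one page, assuming only the signatures above; the "corrected_text" key on the chatgpt_correct_text result is an assumption, since the index shows only Dict[str, Any].

# Hypothetical sketch of the stage-2 protect-correct-restore flow
# (v2.0_initial/ocr_stage2.py), based only on the function index above.
from ocr_stage2 import (
    extract_special_content,
    chatgpt_correct_text,
    restore_special_content,
)

raw_text = "..."  # stage-1 OCR output for one page (placeholder)

# Pull tables/figures/formulas out behind placeholders so the LLM
# cannot corrupt them while correcting the prose.
plain_text, special_contents = extract_special_content(raw_text)

# Ask ChatGPT to correct the remaining plain text.
correction = chatgpt_correct_text(plain_text)

# Put the protected spans back; the key name is an assumption.
final_text = restore_special_content(correction["corrected_text"], special_contents)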