Repository: WellyZhang/RAVEN
Branch: master
Commit: 77927ba3fe26
Files: 30
Total size: 192.3 KB
Directory structure:
gitextract_dccjdwzt/
├── .gitignore
├── LICENSE
├── README.md
├── assets/
│ ├── README.md
│ └── embedding.npy
├── requirements.txt
└── src/
├── dataset/
│ ├── AoT.py
│ ├── Attribute.py
│ ├── Rule.py
│ ├── __init__.py
│ ├── api.py
│ ├── build_tree.py
│ ├── const.py
│ ├── constraints.py
│ ├── main.py
│ ├── rendering.py
│ ├── sampling.py
│ ├── serialize.py
│ └── solver.py
└── model/
├── __init__.py
├── basic_model.py
├── cnn_lstm.py
├── cnn_mlp.py
├── const/
│ ├── __init__.py
│ └── const.py
├── fc_tree_net.py
├── main.py
├── resnet18.py
└── utility/
├── __init__.py
└── dataset_utility.py
================================================
FILE CONTENTS
================================================
================================================
FILE: .gitignore
================================================
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
# pyenv
.python-version
# celery beat schedule file
celerybeat-schedule
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
# vscode
.vscode
# experiments
/experiments
================================================
FILE: LICENSE
================================================
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.
================================================
FILE: README.md
================================================
# RAVEN
This repo contains code for our CVPR 2019 paper.
[RAVEN: A Dataset for <u>R</u>elational and <u>A</u>nalogical <u>V</u>isual r<u>E</u>aso<u>N</u>ing](http://wellyzhang.github.io/attach/cvpr19zhang.pdf)
Chi Zhang*, Feng Gao*, Baoxiong Jia, Yixin Zhu, Song-Chun Zhu
*Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, 2019
(* indicates equal contribution.)
Dramatic progress has been witnessed in basic vision tasks involving low-level perception, such as object recognition, detection, and tracking. Unfortunately, there is still an enormous performance gap between artificial vision systems and human intelligence in terms of higher-level vision problems, especially ones involving reasoning. Earlier attempts in equipping machines with high-level reasoning have hovered around Visual Question Answering (VQA), one typical task associating vision and language understanding. In this work, we propose a new dataset, built in the context of Raven's Progressive Matrices (RPM) and aimed at lifting machine intelligence by associating vision with structural, relational, and analogical reasoning in a hierarchical representation. Unlike previous works in measuring abstract reasoning using RPM, we establish a semantic link between vision and reasoning by providing structure representation. This addition enables a new type of abstract reasoning by jointly operating on the structure representation. Machine reasoning ability using modern computer vision is evaluated in this newly proposed dataset. Additionally, we also provide human performance as a reference. Finally, we show consistent improvement across all models by incorporating a simple neural module that combines visual understanding and structure reasoning.

# Dataset
The dataset is generated using the attributed stochastic image grammar. An example is shown below.

The grammatical design makes the dataset flexible and extendable. In total, we come up with 7 different figural configurations.

The dataset formatting document is in ```assets/README.md```. To download the dataset, please check [our project page](http://wellyzhang.github.io/project/raven.html#dataset).
# Performance
We show performance of models in the following table. For details, please check our [paper](http://wellyzhang.github.io/attach/cvpr19zhang.pdf).
| Method | Acc | Center | 2x2Grid | 3x3Grid | L-R | U-D | O-IC | O-IG |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| LSTM | 13.07% | 13.19% | 14.13% | 13.69% | 12.84% | 12.35% | 12.15% | 12.99% |
| WReN | 14.69% | 13.09% | 28.62% | 28.27% | 7.49% | 6.34% | 8.38% | 10.56% |
| CNN | 36.97% | 33.58% | 30.30% | 33.53% | 39.43% | 41.26% | 43.20% | 37.54% |
| ResNet | 53.43% | 52.82% | 41.86% | 44.29% | 58.77% | 60.16% | 63.19% | 53.12% |
| LSTM+DRT | 13.96% | 14.29% | 15.08% | 14.09% | 13.79% | 13.24% | 13.99% | 13.29% |
| WReN+DRT | 15.02% | 15.38% | 23.26% | 29.51% | 6.99% | 8.43% | 8.93% | 12.35% |
| CNN+DRT | 39.42% | 37.30% | 30.06% | 34.57% | 45.49% | 45.54% | 45.93% | 37.54% |
| ResNet+DRT | **59.56%** | **58.08%** | **46.53%** | **50.40%** | **65.82%** | **67.11%** | **69.09%** | **60.11%** |
| Human | 84.41% | 95.45% | 81.82% | 79.55% | 86.36% | 81.81% | 86.36% | 81.81% |
| Solver | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% |
# Dependencies
**Important**
* Python 2.7
* OpenCV
* PyTorch
* CUDA and cuDNN expected
See ```requirements.txt``` for a full list of packages required.
# Usage
## Dataset Generation
Code to generate the dataset resides in the ```src/dataset``` folder. To generate a dataset, run
```
python src/dataset/main.py --num-samples <number of samples per configuration> --save-dir <directory to save the dataset>
```
Check the ```main.py``` file for a full list of arguments you can adjust.
## Benchmarking
Code to benchmark the dataset resides in ```src/model```. To run the code, first put ```assets/embedding.npy``` in the dataset folder, as specified in ```src/model/utility/dataset_utility.py```. Then run
```
python src/model/main.py --model <model name> --path <path to the dataset>
```
You can check the ```main.py``` file for a full list of arguments. This repo only supports ```Resnet18_MLP```, ```CNN_MLP```, and ```CNN_LSTM```. For WReN, please check the implementation in [the WReN repo](https://github.com/Fen9/WReN).
Note that, for batch processing, we implement the DRT as a maximum tree over all possible tree structures and prune inactive branches during training based on an indicator; a masking sketch is shown below.
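Conceptually, this amounts to computing features for every branch of the maximum tree and zeroing out the branches absent from a given sample's structure. Below is a minimal, self-contained sketch of that masking idea (a toy illustration only, not the repo's actual module; all tensor shapes are made up):
```
import torch

# Toy setup: per-branch features for a batch, plus a 0/1 indicator
# marking which branches of the maximum tree exist in each sample.
batch, num_branches, dim = 4, 6, 32
branch_features = torch.randn(batch, num_branches, dim)
indicator = torch.randint(0, 2, (batch, num_branches)).float()

# Pruning as masking: inactive branches contribute nothing downstream.
pruned = branch_features * indicator.unsqueeze(-1)
aggregate = pruned.sum(dim=1)  # (batch, dim)
```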
# Citation
If you find the paper and/or the code helpful, please cite us.
```
@inproceedings{zhang2019raven,
title={RAVEN: A Dataset for Relational and Analogical Visual rEasoNing},
author={Zhang, Chi and Gao, Feng and Jia, Baoxiong and Zhu, Yixin and Zhu, Song-Chun},
booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2019}
}
```
# Acknowledgement
We'd like to express our gratitude to all the colleagues and anonymous reviewers who helped us improve the paper. This project would not have been possible without the following open-source implementation.
* [WReN](https://github.com/Fen9/WReN)
================================================
FILE: assets/README.md
================================================
# Dataset Format
The dataset folder is organized as follows:
```
center_single/
RAVEN_0_train.npz
RAVEN_0_train.xml
...
RAVEN_6_val.npz
RAVEN_6_val.xml
...
RAVEN_8_test.npz
RAVEN_8_test.xml
...
distribute_four/
...
distribute_nine/
...
in_center_single_out_center_single/
...
in_distribute_four_out_center_single/
...
left_center_single_right_center_single/
...
up_center_single_down_center_single/
...
```
Note that each npz file comes with an xml file.
These 7 folders correspond to the 7 figure configurations in the paper. Specifically,
* Center = center_single
* 2x2Grid = distribute_four
* 3x3Grid = distribute_nine
* Left-Right = left_center_single_right_center_single
* Up-Down = up_center_single_down_center_single
* Out-InCenter = in_center_single_out_center_single
* Out-InGrid = in_distribute_four_out_center_single
## Naming
You might notice that the actual naming in this dataset is slightly different from what's reported in our paper. This is mostly because names like **2x2** or **3x3** do not have corresponding word vectors. They are now **distribute_four** and **distribute_nine**. To keep the paper concise, we also removed certain adjectives. **Center** was **Center_Single** and sometimes came with a component name.
As described in the paper, embeddings for each of them are obtained from pre-trained GloVe vectors and held fixed during training.
## NPZ file
Each npz file contains the following:
* image: a (16, 160, 160) array where all 16 figures in each problem are stacked along the first dimension. Note that the first 8 figures compose the problem matrix and the last 8 are the answer choices.
* target: the index of the correct answer in the answer set. Note that it starts from 0, so offset it by 8 to retrieve the answer panel from the image array (see the loading sketch after this list).
* structure: the tree-structure annotation for the problem, serialized into a sequence by pre-order traversal.
* meta_matrix: similar to that in PGM. The detailed ordering can be found in ```src/dataset/const.py```.
* meta_target: the bitwise OR of meta_matrix over all rows.
* meta_structure: similar to meta_matrix; the detailed ordering is also in ```src/dataset/const.py```.
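To make the offsets concrete, here is a minimal loading sketch with NumPy; the file path is hypothetical, and the field names are exactly those listed above:
```
import numpy as np

# Hypothetical path; any RAVEN npz file works the same way.
data = np.load("center_single/RAVEN_0_train.npz")

images = data["image"]             # (16, 160, 160): 8 context panels + 8 choices
target = int(data["target"])       # answer index among the 8 choices, from 0
answer_panel = images[8 + target]  # offset by 8 to index into the choices

# meta_target is the bitwise OR of meta_matrix over all rows.
meta_matrix = data["meta_matrix"]
meta_target = np.bitwise_or.reduce(meta_matrix.astype(np.int64), axis=0)
```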
## XML file
Each xml file contains the following:
* Context panels and choice panels: each Panel can be further decomposed into Struct, Component, Layout, and Entity (a parsing sketch is given at the end of this section).
* Each layer comes with its name and, where necessary, an id.
* Layout has its own attributes, whose values are indices into the value set (see also ```src/dataset/const.py```), except Position. Position is a list of slots entities can occupy, denoted by center and width/height.
* Entity's attributes follow the same annotation. The bbox is retrieved from the Position array in its parent Layout, and the real_bbox is the actual bounding box, denoted by center and width/height. The mask is encoded using run-length encoding. To decode it, use the ```rle_decode``` function in ```src/dataset/api.py```.
* Rules: rules are divided into groups, each of which applies to the corresponding component with the same id number.
* ```attr``` can be ```Number/Position``` when the rule is ```Constant```, as these two attributes are deeply coupled.
* When there is a rule on ```Number``` or ```Position```, we omit the rule on the other attribute; it should be read **as is**, *i.e.*, as following from the rule on its counterpart (it could simply remain unchanged).
* Therefore, each rule group has 4 rules.
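For orientation, a minimal sketch of walking such an annotation with the Python standard library is shown below. The tag names (```Panel```, ```Entity```) are assumptions based on the layer names described above, so check them against an actual xml file:
```
import xml.etree.ElementTree as ET

# Hypothetical path; tag names assume the Panel/Struct/Component/Layout/Entity
# hierarchy described above.
tree = ET.parse("center_single/RAVEN_0_train.xml")
root = tree.getroot()

for panel in root.iter("Panel"):
    for entity in panel.iter("Entity"):
        # Attribute values are indices into the value sets in src/dataset/const.py.
        print(entity.attrib)
```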
================================================
FILE: requirements.txt
================================================
numpy
scipy
matplotlib
pillow
scikit-image
opencv-contrib-python
tqdm
torch
torchvision
================================================
FILE: src/dataset/AoT.py
================================================
# -*- coding: utf-8 -*-
import copy
import numpy as np
from scipy.special import comb  # scipy.misc.comb was removed in modern SciPy
from Attribute import Angle, Color, Number, Position, Size, Type, Uniformity
from constraints import rule_constraint
class AoTNode(object):
"""Superclass of AoT.
"""
levels_next = {"Root": "Structure",
"Structure": "Component",
"Component": "Layout",
"Layout": "Entity"}
def __init__(self, name, level, node_type, is_pg=False):
self.name = name
self.level = level
self.node_type = node_type
self.children = []
self.is_pg = is_pg
def insert(self, node):
"""Used for public.
Arguments:
node(AoTNode): a node to insert
"""
assert isinstance(node, AoTNode)
assert self.node_type != "leaf"
assert node.level == self.levels_next[self.level]
self.children.append(node)
def _insert(self, node):
"""Used for private.
Arguments:
node(AoTNode): a node to insert
"""
assert isinstance(node, AoTNode)
assert self.node_type != "leaf"
assert node.level == self.levels_next[self.level]
self.children.append(node)
def _resample(self, change_number):
"""Resample the layout. If the number of entities change, resample also the
position distribution; otherwise only resample each attribute for each entity.
Arugments:
change_number(bool): whether to the number has been reset
"""
assert self.is_pg
if self.node_type == "and":
for child in self.children:
child._resample(change_number)
else:
self.children[0]._resample(change_number)
def __repr__(self):
return self.level + "." + self.name
def __str__(self):
return self.level + "." + self.name
class Root(AoTNode):
def __init__(self, name, is_pg=False):
super(Root, self).__init__(name, level="Root", node_type="or", is_pg=is_pg)
def sample(self):
"""The function returns a separate AoT that is correctly parsed.
Note that a new node is needed so that modification does not alter settings
in the original tree.
Returns:
new_node(Root): a newly instantiated node
"""
if self.is_pg:
raise ValueError("Could not sample on a PG")
new_node = Root(self.name, True)
selected = np.random.choice(self.children)
new_node.insert(selected._sample())
return new_node
def resample(self, change_number=False):
self._resample(change_number)
def prune(self, rule_groups):
"""Prune the AoT such that all branches satisfy the constraints.
Arguments:
rule_groups(list of list of Rule): each list of Rule applies to a component
Returns:
new_node(Root): a newly instantiated node with branches all satisfying the constraints;
None if no branches satisfy all the constraints
"""
new_node = Root(self.name)
for structure in self.children:
if len(structure.children) == len(rule_groups):
new_child = structure._prune(rule_groups)
if new_child is not None:
new_node.insert(new_child)
# during real execution, this should never happen
if len(new_node.children) == 0:
new_node = None
return new_node
def prepare(self):
"""This function prepares the AoT for rendering.
Returns:
structure.name(str): used for rendering structure
entities(list of Entity): used for rendering each entity
"""
assert self.is_pg
assert self.level == "Root"
structure = self.children[0]
components = []
for child in structure.children:
components.append(child)
entities = []
for component in components:
for child in component.children[0].children:
entities.append(child)
return structure.name, entities
def sample_new(self, component_idx, attr_name, min_level, max_level, root):
"""Sample a new configuration. This is used for generating answers.
Arguments:
component_idx(int): the component we will sample
attr_name(str): name of the attribute to sample
min_level(int): lower bound of value level for the attribute
max_level(int): upper bound of value level for the attribute
root(AoTNode): the answer AoT, used for storing previous value levels for each attribute
"""
assert self.is_pg
self.children[0]._sample_new(component_idx, attr_name, min_level, max_level, root.children[0])
class Structure(AoTNode):
def __init__(self, name, is_pg=False):
super(Structure, self).__init__(name, level="Structure", node_type="and", is_pg=is_pg)
def _sample(self):
if self.is_pg:
raise ValueError("Could not sample on a PG")
new_node = Structure(self.name, True)
for child in self.children:
new_node.insert(child._sample())
return new_node
def _prune(self, rule_groups):
new_node = Structure(self.name)
for i in range(len(self.children)):
child = self.children[i]
# if any of the components fails to satisfy the constraint
# the structure could not be chosen
new_child = child._prune(rule_groups[i])
if new_child is None:
return None
new_node.insert(new_child)
return new_node
def _sample_new(self, component_idx, attr_name, min_level, max_level, structure):
self.children[component_idx]._sample_new(attr_name, min_level, max_level, structure.children[component_idx])
class Component(AoTNode):
def __init__(self, name, is_pg=False):
super(Component, self).__init__(name, level="Component", node_type="or", is_pg=is_pg)
def _sample(self):
if self.is_pg:
raise ValueError("Could not sample on a PG")
new_node = Component(self.name, True)
selected = np.random.choice(self.children)
new_node.insert(selected._sample())
return new_node
def _prune(self, rule_group):
new_node = Component(self.name)
for child in self.children:
new_child = child._update_constraint(rule_group)
if new_child is not None:
new_node.insert(new_child)
if len(new_node.children) == 0:
new_node = None
return new_node
def _sample_new(self, attr_name, min_level, max_level, component):
self.children[0]._sample_new(attr_name, min_level, max_level, component.children[0])
class Layout(AoTNode):
"""Layout is the highest level of the hierarchy that has attributes (Number, Position and Uniformity).
To copy a Layout, please use deepcopy such that newly instantiated and separated attributes are created.
"""
def __init__(self, name, layout_constraint, entity_constraint,
orig_layout_constraint=None, orig_entity_constraint=None,
sample_new_num_count=None, is_pg=False):
super(Layout, self).__init__(name, level="Layout", node_type="and", is_pg=is_pg)
self.layout_constraint = layout_constraint
self.entity_constraint = entity_constraint
self.number = Number(min_level=layout_constraint["Number"][0], max_level=layout_constraint["Number"][1])
self.position = Position(pos_type=layout_constraint["Position"][0], pos_list=layout_constraint["Position"][1])
self.uniformity = Uniformity(min_level=layout_constraint["Uni"][0], max_level=layout_constraint["Uni"][1])
self.number.sample()
self.position.sample(self.number.get_value())
self.uniformity.sample()
# store initial layout_constraint and entity_constraint for answer generation
if orig_layout_constraint is None:
self.orig_layout_constraint = copy.deepcopy(self.layout_constraint)
else:
self.orig_layout_constraint = orig_layout_constraint
if orig_entity_constraint is None:
self.orig_entity_constraint = copy.deepcopy(self.entity_constraint)
else:
self.orig_entity_constraint = orig_entity_constraint
if sample_new_num_count is None:
self.sample_new_num_count = dict()
most_num = len(self.position.values)
for i in range(layout_constraint["Number"][0], layout_constraint["Number"][1] + 1):
self.sample_new_num_count[i] = [comb(most_num, i + 1), []]
else:
self.sample_new_num_count = sample_new_num_count
def add_new(self, *bboxes):
"""Add new entities into this level.
Arguments:
*bboxes(tuple of bbox): bboxes of new entities
"""
name = self.number.get_value()
uni = self.uniformity.get_value()
for i in range(len(bboxes)):
name += i
bbox = bboxes[i]
new_entity = copy.deepcopy(self.children[0])
new_entity.name = str(name)
new_entity.bbox = bbox
if not uni:
new_entity.resample()
self._insert(new_entity)
def resample(self, change_number=False):
self._resample(change_number)
def _sample(self):
"""Though Layout is an "and" node, we do not enumerate all possible configurations, but rather
we treat it as a sampling process such that different configurations are sampled. After the
sampling, the lower level Entities are instantiated.
Returns:
new_node(Layout): a separated node with independent attributes
"""
pos = self.position.get_value()
new_node = copy.deepcopy(self)
new_node.is_pg = True
if self.uniformity.get_value():
node = Entity(name=str(0), bbox=pos[0], entity_constraint=self.entity_constraint)
new_node._insert(node)
for i in range(1, len(pos)):
bbox = pos[i]
node = copy.deepcopy(node)
node.name = str(i)
node.bbox = bbox
new_node._insert(node)
else:
for i in range(len(pos)):
bbox = pos[i]
node = Entity(name=str(i), bbox=bbox, entity_constraint=self.entity_constraint)
new_node._insert(node)
return new_node
def _resample(self, change_number):
"""Resample each attribute for every child.
This function is called across rows.
Arguments:
change_number(bool): whether to resample the number of entities
"""
if change_number:
self.number.sample()
del self.children[:]
self.position.sample(self.number.get_value())
pos = self.position.get_value()
if self.uniformity.get_value():
node = Entity(name=str(0), bbox=pos[0], entity_constraint=self.entity_constraint)
self._insert(node)
for i in range(1, len(pos)):
bbox = pos[i]
node = copy.deepcopy(node)
node.name = str(i)
node.bbox = bbox
self._insert(node)
else:
for i in range(len(pos)):
bbox = pos[i]
node = Entity(name=str(i), bbox=bbox, entity_constraint=self.entity_constraint)
self._insert(node)
def _update_constraint(self, rule_group):
"""Update the constraint of the layout. If one constraint is not satisfied, return None
such that this structure is discarded.
Arguments:
rule_group(list of Rule): all rules to apply to this layout
Returns:
Layout(Layout): a new Layout node with independent attributes
"""
num_min = self.layout_constraint["Number"][0]
num_max = self.layout_constraint["Number"][1]
uni_min = self.layout_constraint["Uni"][0]
uni_max = self.layout_constraint["Uni"][1]
type_min = self.entity_constraint["Type"][0]
type_max = self.entity_constraint["Type"][1]
size_min = self.entity_constraint["Size"][0]
size_max = self.entity_constraint["Size"][1]
color_min = self.entity_constraint["Color"][0]
color_max = self.entity_constraint["Color"][1]
new_constraints = rule_constraint(rule_group, num_min, num_max,
uni_min, uni_max,
type_min, type_max,
size_min, size_max,
color_min, color_max)
new_layout_constraint, new_entity_constraint = new_constraints
new_num_min = new_layout_constraint["Number"][0]
new_num_max = new_layout_constraint["Number"][1]
if new_num_min > new_num_max:
return None
new_uni_min = new_layout_constraint["Uni"][0]
new_uni_max = new_layout_constraint["Uni"][1]
if new_uni_min > new_uni_max:
return None
new_type_min = new_entity_constraint["Type"][0]
new_type_max = new_entity_constraint["Type"][1]
if new_type_min > new_type_max:
return None
new_size_min = new_entity_constraint["Size"][0]
new_size_max = new_entity_constraint["Size"][1]
if new_size_min > new_size_max:
return None
new_color_min = new_entity_constraint["Color"][0]
new_color_max = new_entity_constraint["Color"][1]
if new_color_min > new_color_max:
return None
new_layout_constraint = copy.deepcopy(self.layout_constraint)
new_layout_constraint["Number"][:] = [new_num_min, new_num_max]
new_layout_constraint["Uni"][:] = [new_uni_min, new_uni_max]
new_entity_constraint = copy.deepcopy(self.entity_constraint)
new_entity_constraint["Type"][:] = [new_type_min, new_type_max]
new_entity_constraint["Size"][:] = [new_size_min, new_size_max]
new_entity_constraint["Color"][:] = [new_color_min, new_color_max]
return Layout(self.name, new_layout_constraint, new_entity_constraint,
self.orig_layout_constraint, self.orig_entity_constraint,
self.sample_new_num_count)
def reset_constraint(self, attr):
attr_name = attr.lower()
instance = getattr(self, attr_name)
instance.min_level = self.layout_constraint[attr][0]
instance.max_level = self.layout_constraint[attr][1]
def _sample_new(self, attr_name, min_level, max_level, layout):
if attr_name == "Number":
while True:
value_level = self.number.sample_new(min_level, max_level)
if layout.sample_new_num_count[value_level][0] == 0:
continue
new_num = self.number.get_value(value_level)
new_value_idx = self.position.sample_new(new_num)
set_new_value_idx = set(new_value_idx)
if set_new_value_idx not in layout.sample_new_num_count[value_level][1]:
layout.sample_new_num_count[value_level][0] -= 1
layout.sample_new_num_count[value_level][1].append(set_new_value_idx)
break
self.number.set_value_level(value_level)
self.position.set_value_idx(new_value_idx)
pos = self.position.get_value()
del self.children[:]
for i in range(len(pos)):
bbox = pos[i]
node = Entity(name=str(i), bbox=bbox, entity_constraint=self.entity_constraint)
self._insert(node)
elif attr_name == "Position":
new_value_idx = self.position.sample_new(self.number.get_value())
layout.position.previous_values.append(new_value_idx)
self.position.set_value_idx(new_value_idx)
pos = self.position.get_value()
for i in range(len(pos)):
bbox = pos[i]
self.children[i].bbox = bbox
elif attr_name == "Type":
for index in range(len(self.children)):
new_value_level = self.children[index].type.sample_new(min_level, max_level)
self.children[index].type.set_value_level(new_value_level)
layout.children[index].type.previous_values.append(new_value_level)
elif attr_name == "Size":
for index in range(len(self.children)):
new_value_level = self.children[index].size.sample_new(min_level, max_level)
self.children[index].size.set_value_level(new_value_level)
layout.children[index].size.previous_values.append(new_value_level)
elif attr_name == "Color":
for index in range(len(self.children)):
new_value_level = self.children[index].color.sample_new(min_level, max_level)
self.children[index].color.set_value_level(new_value_level)
layout.children[index].color.previous_values.append(new_value_level)
else:
raise ValueError("Unsupported operation")
class Entity(AoTNode):
def __init__(self, name, bbox, entity_constraint):
super(Entity, self).__init__(name, level="Entity", node_type="leaf", is_pg=True)
# Attributes
# Sample each attribute so that its value lies in the admissible range,
# i.e., draw uniformly at random within the constraint bounds
self.entity_constraint = entity_constraint
self.bbox = bbox
self.type = Type(min_level=entity_constraint["Type"][0], max_level=entity_constraint["Type"][1])
self.type.sample()
self.size = Size(min_level=entity_constraint["Size"][0], max_level=entity_constraint["Size"][1])
self.size.sample()
self.color = Color(min_level=entity_constraint["Color"][0], max_level=entity_constraint["Color"][1])
self.color.sample()
self.angle = Angle(min_level=entity_constraint["Angle"][0], max_level=entity_constraint["Angle"][1])
self.angle.sample()
def reset_constraint(self, attr, min_level, max_level):
attr_name = attr.lower()
self.entity_constraint[attr][:] = [min_level, max_level]
instance = getattr(self, attr_name)
instance.min_level = min_level
instance.max_level = max_level
def resample(self):
self.type.sample()
self.size.sample()
self.color.sample()
self.angle.sample()
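# --- Editor's note: minimal usage sketch, not part of the original source. ---
# It shows how a leaf Entity is built and resampled under an explicit constraint
# dict; the bounds below are hypothetical but mirror the level ranges in const.py,
# and the bbox follows the planar format [x_c, y_c, max_w, max_h].
if __name__ == "__main__":
    demo_constraint = {"Type": [1, 5],   # exclude "none"
                       "Size": [0, 5],
                       "Color": [0, 9],
                       "Angle": [0, 7]}
    entity = Entity(name="0", bbox=[0.5, 0.5, 1, 1], entity_constraint=demo_constraint)
    print("sampled type: {}".format(entity.type.get_value()))
    entity.resample()  # redraws Type/Size/Color/Angle within the same bounds
    print("type after resample: {}".format(entity.type.get_value()))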
================================================
FILE: src/dataset/Attribute.py
================================================
# -*- coding: utf-8 -*-
import numpy as np
from const import (ANGLE_MAX, ANGLE_MIN, ANGLE_VALUES, COLOR_MAX, COLOR_MIN,
COLOR_VALUES, NUM_MAX, NUM_MIN, NUM_VALUES, SIZE_MAX,
SIZE_MIN, SIZE_VALUES, TYPE_MAX, TYPE_MIN, TYPE_VALUES,
UNI_MAX, UNI_MIN, UNI_VALUES)
class Attribute(object):
"""Super-class for all attributes. This should not be instantiated.
In the sub-class, each attribute should have a pre-defined value set
and a member to indicate the index in the value set. This design enables
setting a value by modifying the index only. Each instance also
carries value index boundaries, stored as min_level and max_level; these
boundaries are useful for imposing constraints on the value set.
Before accessing the value, first sample a value level by calling
the sample function.
"""
def __init__(self, name):
self.name = name
self.level = "Attribute"
# memory to store previous values
self.previous_values = []
def sample(self):
pass
def get_value(self):
pass
def set_value(self):
pass
def __repr__(self):
return self.level + "." + self.name
def __str__(self):
return self.level + "." + self.name
class Number(Attribute):
def __init__(self, min_level=NUM_MIN, max_level=NUM_MAX):
super(Number, self).__init__("Number")
self.value_level = 0
self.values = NUM_VALUES
self.min_level = min_level
self.max_level = max_level
def sample(self, min_level=NUM_MIN, max_level=NUM_MAX):
# min_level: min level index
# max_level: max level index
min_level = max(self.min_level, min_level)
max_level = min(self.max_level, max_level)
self.value_level = np.random.choice(range(min_level, max_level + 1))
def sample_new(self, min_level=None, max_level=None, previous_values=None):
"""Sample new values for generating the answer set.
Returns:
new_idx(int): a new value_level
"""
if min_level is None or max_level is None:
values = range(self.min_level, self.max_level + 1)
else:
values = range(min_level, max_level + 1)
if not previous_values:
available = set(values) - set(self.previous_values) - set([self.value_level])
else:
available = set(values) - set(previous_values) - set([self.value_level])
new_idx = np.random.choice(list(available))
return new_idx
def get_value_level(self):
return self.value_level
def set_value_level(self, value_level):
self.value_level = value_level
def get_value(self, value_level=None):
if value_level is None:
value_level = self.value_level
return self.values[value_level]
class Type(Attribute):
def __init__(self, min_level=TYPE_MIN, max_level=TYPE_MAX):
super(Type, self).__init__("Type")
self.value_level = 0
self.values = TYPE_VALUES
self.min_level = min_level
self.max_level = max_level
def sample(self, min_level=TYPE_MIN, max_level=TYPE_MAX):
min_level = max(self.min_level, min_level)
max_level = min(self.max_level, max_level)
self.value_level = np.random.choice(range(min_level, max_level + 1))
def sample_new(self, min_level=None, max_level=None, previous_values=None):
if min_level is None or max_level is None:
values = range(self.min_level, self.max_level + 1)
else:
values = range(min_level, max_level + 1)
if not previous_values:
available = set(values) - set(self.previous_values) - set([self.value_level])
else:
available = set(values) - set(previous_values) - set([self.value_level])
new_idx = np.random.choice(list(available))
return new_idx
def get_value_level(self):
return self.value_level
def set_value_level(self, value_level):
self.value_level = value_level
def get_value(self, value_level=None):
if value_level is None:
value_level = self.value_level
return self.values[value_level]
class Size(Attribute):
def __init__(self, min_level=SIZE_MIN, max_level=SIZE_MAX):
super(Size, self).__init__("Size")
self.value_level = 3
self.values = SIZE_VALUES
self.min_level = min_level
self.max_level = max_level
def sample(self, min_level=SIZE_MIN, max_level=SIZE_MAX):
min_level = max(self.min_level, min_level)
max_level = min(self.max_level, max_level)
self.value_level = np.random.choice(range(min_level, max_level + 1))
def sample_new(self, min_level=None, max_level=None, previous_values=None):
if min_level is None or max_level is None:
values = range(self.min_level, self.max_level + 1)
else:
values = range(min_level, max_level + 1)
if not previous_values:
available = set(values) - set(self.previous_values) - set([self.value_level])
else:
available = set(values) - set(previous_values) - set([self.value_level])
new_idx = np.random.choice(list(available))
return new_idx
def get_value_level(self):
return self.value_level
def set_value_level(self, value_level):
self.value_level = value_level
def get_value(self, value_level=None):
if value_level is None:
value_level = self.value_level
return self.values[value_level]
class Color(Attribute):
def __init__(self, min_level=COLOR_MIN, max_level=COLOR_MAX):
super(Color, self).__init__("Color")
self.value_level = 0
self.values = COLOR_VALUES
self.min_level = min_level
self.max_level = max_level
def sample(self, min_level=COLOR_MIN, max_level=COLOR_MAX):
min_level = max(self.min_level, min_level)
max_level = min(self.max_level, max_level)
self.value_level = np.random.choice(range(min_level, max_level + 1))
def sample_new(self, min_level=None, max_level=None, previous_values=None):
if min_level is None or max_level is None:
values = range(self.min_level, self.max_level + 1)
else:
values = range(min_level, max_level + 1)
if not previous_values:
available = set(values) - set(self.previous_values) - set([self.value_level])
else:
available = set(values) - set(previous_values) - set([self.value_level])
new_idx = np.random.choice(list(available))
return new_idx
def get_value_level(self):
return self.value_level
def set_value_level(self, value_level):
self.value_level = value_level
def get_value(self, value_level=None):
if value_level is None:
value_level = self.value_level
return self.values[value_level]
class Angle(Attribute):
def __init__(self, min_level=ANGLE_MIN, max_level=ANGLE_MAX):
super(Angle, self).__init__("Angle")
self.value_level = 3
self.values = ANGLE_VALUES
self.min_level = min_level
self.max_level = max_level
def sample(self, min_level=ANGLE_MIN, max_level=ANGLE_MAX):
min_level = max(self.min_level, min_level)
max_level = min(self.max_level, max_level)
self.value_level = np.random.choice(range(min_level, max_level + 1))
def sample_new(self, min_level=None, max_level=None, previous_values=None):
if min_level is None or max_level is None:
values = range(self.min_level, self.max_level + 1)
else:
values = range(min_level, max_level + 1)
if not previous_values:
available = set(values) - set(self.previous_values) - set([self.value_level])
else:
available = set(values) - set(previous_values) - set([self.value_level])
new_idx = np.random.choice(list(available))
return new_idx
def get_value_level(self):
return self.value_level
def set_value_level(self, value_level):
self.value_level = value_level
def get_value(self, value_level=None):
if value_level is None:
value_level = self.value_level
return self.values[value_level]
class Uniformity(Attribute):
def __init__(self, min_level=UNI_MIN, max_level=UNI_MAX):
super(Uniformity, self).__init__("Uniformity")
self.value_level = 0
self.values = UNI_VALUES
self.min_level = min_level
self.max_level = max_level
def sample(self):
self.value_level = np.random.choice(range(self.min_level, self.max_level + 1))
def sample_new(self):
# Should not resample uniformity
pass
def set_value_level(self, value_level):
self.value_level = value_level
def get_value_level(self):
return self.value_level
def get_value(self, value_level=None):
if value_level is None:
value_level = self.value_level
return self.values[value_level]
class Position(Attribute):
"""Position is a special case. There are the planar position and
the angular position. Planar position allows translation in the plane
while angular Position performs roration around an axis penperdicular to the plane.
"""
def __init__(self, pos_type, pos_list):
"""Instantiate the Position attribute by passing a position type
and a pre-defined position distribution on the plane. This attribute
is strongly coupled with Number and hence value index boundaries are
not needed.
Arguments:
pos_type(str): either "planar" or "angular
pos_list(list of list of numbers): actual distribution on the plane
"""
super(Position, self).__init__("Position")
# planar: [x_c, y_c, max_w, max_h]
# angular: [x_c, y_c, max_w, max_h, x_r, y_r, omega]
assert pos_type in ("planar", "angular")
self.pos_type = pos_type
self.values = pos_list
self.value_idx = None
def sample(self, num):
"""Sample multiple positions at the same time.
Arguments:
num(int): the number of positions to sample
"""
length = len(self.values)
assert num <= length
self.value_idx = np.random.choice(range(length), num, False)
def sample_new(self, num, previous_values=None):
# sample_new uses rejection sampling: keep drawing until the new index set differs from the current one and from every previous value
length = len(self.values)
if not previous_values:
constraints = self.previous_values
else:
constraints = previous_values
while True:
finished = True
new_value_idx = np.random.choice(length, num, False)
if set(new_value_idx) == set(self.value_idx):
continue
for previous_value in constraints:
if set(new_value_idx) == set(previous_value):
finished = False
break
if finished:
break
return new_value_idx
def sample_add(self, num):
"""Sample additional number of positions.
Arguments:
num(int): the number of additional positions to sample
Returns:
ret(tuple of position): new positions to add to the layout
"""
ret = []
available = set(range(len(self.values))) - set(self.value_idx)
idxes_2_add = np.random.choice(list(available), num, False)
for index in idxes_2_add:
self.value_idx = np.insert(self.value_idx, 0, index)
ret.append(self.values[index])
return ret
def get_value_idx(self):
return self.value_idx
def set_value_idx(self, value_idx):
# Note that after sampling self.value_idx is a Numpy array
self.value_idx = value_idx
def get_value(self, value_idx=None):
if value_idx is None:
value_idx = self.value_idx
ret = []
for idx in value_idx:
ret.append(self.values[idx])
return ret
def remove(self, bbox):
# Note that after sampling self.value_idx is a Numpy array
idx = self.values.index(bbox)
np_idx = np.where(self.value_idx == idx)[0][0]
self.value_idx = np.delete(self.value_idx, np_idx)
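# --- Editor's note: minimal usage sketch, not part of the original source. ---
# Demonstrates the level/value indirection and the sample_new contract: the new
# level always avoids both previous_values and the current value_level. The grid
# below is a hypothetical four-slot planar layout.
if __name__ == "__main__":
    np.random.seed(0)
    number = Number(min_level=0, max_level=3)   # admissible values 1..4
    number.sample()
    print("number of entities: {}".format(number.get_value()))
    new_level = number.sample_new()
    assert new_level != number.get_value_level()
    grid = [[0.25, 0.25, 0.5, 0.5], [0.25, 0.75, 0.5, 0.5],
            [0.75, 0.25, 0.5, 0.5], [0.75, 0.75, 0.5, 0.5]]
    position = Position("planar", grid)
    position.sample(number.get_value())         # Position is coupled with Number
    print("occupied slots: {}".format(list(position.get_value_idx())))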
================================================
FILE: src/dataset/Rule.py
================================================
# -*- coding: utf-8 -*-
import copy
import numpy as np
from const import COLOR_MAX, COLOR_MIN
def Rule_Wrapper(name, attr, param, component_idx):
ret = None
if name == "Constant":
ret = Constant(name, attr, param, component_idx)
elif name == "Progression":
ret = Progression(name, attr, param, component_idx)
elif name == "Arithmetic":
ret = Arithmetic(name, attr, param, component_idx)
elif name == "Distribute_Three":
ret = Distribute_Three(name, attr, param, component_idx)
else:
raise ValueError("Unsupported Rule")
return ret
class Rule(object):
"""General API for a rule.
Priority order: Rule on Number/Position always comes first
"""
def __init__(self, name, attr, params, component_idx=0):
"""Instantiate a rule by its name, attribute, paramter list and the component it applies to.
Each rule should be applied to all entities in a component.
Arguments:
name(str): pre-defined name of the rule
attr(str): pre-defined name of the attribute
params(list): a list of possible parameters for it to sample
component_idx(int): the index of the component to apply the rule
"""
self.name = name
self.attr = attr
self.params = params
self.component_idx = component_idx
self.value = 0
self.sample()
def sample(self):
"""Sample a parameter from the parameter list.
"""
if self.params is not None:
self.value = np.random.choice(self.params)
def apply_rule(self, aot, in_aot=None):
"""Apply the rule to a component in the AoT.
Arguments:
aot(AoTNode): an AoT for reference
in_aot(AoTNode): an AoT to apply the rule
Returns:
second_aot(AoTNode): a modified AoT
"""
# Root -> Structure -> Component -> Layout -> Entity
pass
class Constant(Rule):
"""Unary operator. Nothing changes.
"""
def __init__(self, name, attr, param, component_idx):
super(Constant, self).__init__(name, attr, param, component_idx)
def apply_rule(self, aot, in_aot=None):
if in_aot is None:
in_aot = aot
return copy.deepcopy(in_aot)
class Progression(Rule):
"""Unary operator. Attribute difference on two consequetive Panels remains the same.
"""
def __init__(self, name, attr, param, component_idx):
super(Progression, self).__init__(name, attr, param, component_idx)
# Flag to trigger consistency of the attribute in the first column
self.first_col = True
def apply_rule(self, aot, in_aot=None):
current_layout = aot.children[0].children[self.component_idx].children[0]
if in_aot is None:
in_aot = aot
second_aot = copy.deepcopy(in_aot)
second_layout = second_aot.children[0].children[self.component_idx].children[0]
if self.attr == "Number":
second_layout.number.set_value_level(second_layout.number.get_value_level() + self.value)
second_layout.position.sample(second_layout.number.get_value())
pos = second_layout.position.get_value()
del second_layout.children[:]
for i in range(len(pos)):
entity = copy.deepcopy(current_layout.children[0])
entity.name = str(i)
entity.bbox = pos[i]
if not current_layout.uniformity.get_value():
entity.resample()
second_layout.insert(entity)
elif self.attr == "Position":
second_pos_idx = (second_layout.position.get_value_idx() + self.value) % len(second_layout.position.values)
second_layout.position.set_value_idx(second_pos_idx)
second_bbox = second_layout.position.get_value()
for i in range(len(second_bbox)):
second_layout.children[i].bbox = second_bbox[i]
elif self.attr == "Type":
old_value_level = current_layout.children[0].type.get_value_level()
# enforce value consistency
if self.first_col and not current_layout.uniformity.get_value():
for entity in current_layout.children:
entity.type.set_value_level(old_value_level)
for entity in second_layout.children:
entity.type.set_value_level(old_value_level + self.value)
elif self.attr == "Size":
old_value_level = current_layout.children[0].size.get_value_level()
# enforce value consistency
if self.first_col and not current_layout.uniformity.get_value():
for entity in current_layout.children:
entity.size.set_value_level(old_value_level)
for entity in second_layout.children:
entity.size.set_value_level(old_value_level + self.value)
elif self.attr == "Color":
old_value_level = current_layout.children[0].color.get_value_level()
# enforce value consistency
if self.first_col and not current_layout.uniformity.get_value():
for entity in current_layout.children:
entity.color.set_value_level(old_value_level)
for entity in second_layout.children:
entity.color.set_value_level(old_value_level + self.value)
else:
raise ValueError("Unsupported attriubute")
self.first_col = not self.first_col
return second_aot
class Arithmetic(Rule):
"""Binary operator. Basically: Panel_3 = Panel_1 + Panel_2.
For Position: + means SET_UNION and - means SET_DIFF.
"""
def __init__(self, name, attr, param, component_idx):
super(Arithmetic, self).__init__(name, attr, param, component_idx)
self.memory = []
self.color_count = 0
self.color_white_alarm = False
def apply_rule(self, aot, in_aot=None):
current_layout = aot.children[0].children[self.component_idx].children[0]
if in_aot is None:
in_aot = aot
second_aot = copy.deepcopy(in_aot)
second_layout = second_aot.children[0].children[self.component_idx].children[0]
if self.attr == "Number":
# the third col
if len(self.memory) > 0:
first_layout_number_level = self.memory.pop()
if self.value > 0:
total = first_layout_number_level + 1 + current_layout.number.get_value()
else:
total = first_layout_number_level + 1 - current_layout.number.get_value()
second_layout.number.set_value_level(total - 1)
# the second col
else:
old_value_level = current_layout.number.get_value_level()
self.memory.append(old_value_level)
if self.value > 0:
num_max_level_orig = sum(current_layout.layout_constraint["Number"]) + 1
new_num_max_level = num_max_level_orig - old_value_level - 1
second_layout.layout_constraint["Number"][1] = new_num_max_level
else:
num_min_level_orig = (second_layout.layout_constraint["Number"][0] - 1) / 2
new_num_max_level = old_value_level - num_min_level_orig - 1
second_layout.layout_constraint["Number"][:] = [num_min_level_orig, new_num_max_level]
second_layout.reset_constraint("Number")
second_layout.number.sample()
second_layout.position.sample(second_layout.number.get_value())
pos = second_layout.position.get_value()
del second_layout.children[:]
for i in range(len(pos)):
entity = copy.deepcopy(current_layout.children[0])
entity.name = str(i)
entity.bbox = pos[i]
if not current_layout.uniformity.get_value():
entity.resample()
second_layout.insert(entity)
elif self.attr == "Position":
# ADD is interpreted as SET_UNION; SUB is interpreted as SET_DIFF
# the third col
if len(self.memory) > 0:
first_layout_value_idx = self.memory.pop()
if self.value > 0:
new_pos_idx = set(first_layout_value_idx) | set(current_layout.position.get_value_idx())
else:
new_pos_idx = set(first_layout_value_idx) - set(current_layout.position.get_value_idx())
second_layout.number.set_value_level(len(new_pos_idx) - 1)
second_layout.position.set_value_idx(np.array(list(new_pos_idx)))
# the second col
else:
current_layout_value_idx = current_layout.position.get_value_idx()
self.memory.append(current_layout_value_idx)
while True:
second_layout.number.sample()
second_layout.position.sample(second_layout.number.get_value())
# if UNION, the second panel must not be a subset of the first; otherwise the union adds nothing
if self.value > 0:
if not (set(current_layout_value_idx) >= set(second_layout.position.get_value_idx())):
break
# if DIFF, the first panel must not be a subset of the second; otherwise no entities would be left
else:
if not (set(current_layout_value_idx) <= set(second_layout.position.get_value_idx())):
break
pos = second_layout.position.get_value()
del second_layout.children[:]
for i in range(len(pos)):
entity = copy.deepcopy(current_layout.children[0])
entity.name = str(i)
entity.bbox = pos[i]
if not current_layout.uniformity.get_value():
entity.resample()
second_layout.insert(entity)
elif self.attr == "Size":
if len(self.memory) > 0:
first_layout_size_level = self.memory.pop()
if self.value > 0:
new_size_value_level = first_layout_size_level + \
current_layout.children[0].size.get_value_level() + 1
else:
new_size_value_level = first_layout_size_level - \
current_layout.children[0].size.get_value_level() - 1
for entity in second_layout.children:
entity.size.set_value_level(new_size_value_level)
else:
# make sure of value consistency
old_value_level = current_layout.children[0].size.get_value_level()
self.memory.append(old_value_level)
if not current_layout.uniformity.get_value():
for entity in current_layout.children:
entity.size.set_value_level(old_value_level)
if self.value > 0:
size_max_level_orig = sum(current_layout.entity_constraint["Size"]) + 1
new_size_max_level = size_max_level_orig - old_value_level - 1
# deepcopy breaks the link of constraints between Layout and Entity
# Need to reset each attribute
second_layout.entity_constraint["Size"][1] = new_size_max_level
else:
size_min_level_orig = (current_layout.entity_constraint["Size"][0] - 1) / 2
new_size_max_level = old_value_level - size_min_level_orig - 1
second_layout.entity_constraint["Size"] = [size_min_level_orig, new_size_max_level]
new_size_min_level, new_size_max_level = second_layout.entity_constraint["Size"]
the_child = second_layout.children[0]
the_child.reset_constraint("Size", new_size_min_level, new_size_max_level)
the_child.size.sample()
new_size_value_level = the_child.size.get_value_level()
for idx in range(1, len(second_layout.children)):
entity = second_layout.children[idx]
entity.reset_constraint("Size", new_size_min_level, new_size_max_level)
entity.size.set_value_level(new_size_value_level)
elif self.attr == "Color":
self.color_count += 1
if len(self.memory) > 0:
first_layout_color_level = self.memory.pop()
if self.value > 0:
new_color_value_level = first_layout_color_level + \
current_layout.children[0].color.get_value_level()
else:
new_color_value_level = first_layout_color_level - \
current_layout.children[0].color.get_value_level()
for entity in second_layout.children:
entity.color.set_value_level(new_color_value_level)
else:
# Logic here: C_12 and C_22 cannot both be 0; otherwise it is impossible to distinguish + from -
# If C_12 == 0, we set an alarm
# Under this alarm, if C_21 == MAX and the rule is ADD, resample C_21 to ensure C_22 can be other than 0
# Similarly, if C_21 == 0 and the rule is SUB, resample C_21 to ensure C_22 can be other than 0
# Finally, loop until C_22 is not 0
# make sure of value consistency
old_value_level = current_layout.children[0].color.get_value_level()
# the third time you apply this rule and find C_21 == MAX/0 if +/-
reset_current_layout = False
if self.color_count == 3 and self.color_white_alarm:
if self.value > 0 and old_value_level == COLOR_MAX:
old_value_level = current_layout.children[0].color.sample_new()
reset_current_layout = True
if self.value < 0 and old_value_level == COLOR_MIN:
old_value_level = current_layout.children[0].color.sample_new()
reset_current_layout = True
self.memory.append(old_value_level)
if reset_current_layout or not current_layout.uniformity.get_value():
for entity in current_layout.children:
entity.color.set_value_level(old_value_level)
if self.value > 0:
color_max_level_orig = sum(current_layout.entity_constraint["Color"])
new_color_max_level = color_max_level_orig - old_value_level
second_layout.entity_constraint["Color"][1] = new_color_max_level
else:
color_min_level_orig = second_layout.entity_constraint["Color"][0] / 2
new_color_max_level = old_value_level
second_layout.entity_constraint["Color"][:] = [color_min_level_orig, new_color_max_level]
new_color_min_level, new_color_max_level = second_layout.entity_constraint["Color"]
the_child = second_layout.children[0]
the_child.reset_constraint("Color", new_color_min_level, new_color_max_level)
the_child.color.sample()
new_color_value_level = the_child.color.get_value_level()
# the first time you apply this rule and get C_12 == 0
# set the alarm
if self.color_count == 1:
self.color_white_alarm = (new_color_value_level == 0)
if self.color_count == 3 and self.color_white_alarm and new_color_value_level == 0:
new_color_value_level = the_child.color.sample_new()
the_child.color.set_value_level(new_color_value_level)
for idx in range(1, len(second_layout.children)):
entity = second_layout.children[idx]
entity.reset_constraint("Color", new_color_min_level, new_color_max_level)
entity.color.set_value_level(new_color_value_level)
else:
raise ValueError("Unsupported attriubute")
return second_aot
class Distribute_Three(Rule):
"""Ternay operator. Three values across the columns form a fixed set.
"""
def __init__(self, name, attr, param, component_idx):
super(Distribute_Three, self).__init__(name, attr, param, component_idx)
self.value_levels = []
self.count = 0
def apply_rule(self, aot, in_aot=None):
current_layout = aot.children[0].children[self.component_idx].children[0]
if in_aot is None:
in_aot = aot
second_aot = copy.deepcopy(in_aot)
second_layout = second_aot.children[0].children[self.component_idx].children[0]
if self.attr == "Number":
if self.count == 0:
all_value_levels = range(current_layout.layout_constraint["Number"][0],
current_layout.layout_constraint["Number"][1] + 1)
current_value_level = current_layout.number.get_value_level()
idx = all_value_levels.index(current_value_level)
all_value_levels.pop(idx)
three_value_levels = np.random.choice(all_value_levels, 2, False)
three_value_levels = np.insert(three_value_levels, 0, current_value_level)
self.value_levels.append(three_value_levels[[0, 1, 2]])
if np.random.uniform() >= 0.5:
self.value_levels.append(three_value_levels[[1, 2, 0]])
self.value_levels.append(three_value_levels[[2, 0, 1]])
else:
self.value_levels.append(three_value_levels[[2, 0, 1]])
self.value_levels.append(three_value_levels[[1, 2, 0]])
second_layout.number.set_value_level(self.value_levels[0][1])
else:
row, col = divmod(self.count, 2)
if col == 0:
current_layout.number.set_value_level(self.value_levels[row][0])
current_layout.resample()
second_aot = copy.deepcopy(aot)
second_layout = second_aot.children[0].children[self.component_idx].children[0]
second_layout.number.set_value_level(self.value_levels[row][1])
else:
second_layout.number.set_value_level(self.value_levels[row][2])
second_layout.position.sample(second_layout.number.get_value())
pos = second_layout.position.get_value()
del second_layout.children[:]
for i in range(len(pos)):
entity = copy.deepcopy(current_layout.children[0])
entity.name = str(i)
entity.bbox = pos[i]
if not current_layout.uniformity.get_value():
entity.resample()
second_layout.insert(entity)
self.count = (self.count + 1) % 6
elif self.attr == "Position":
if self.count == 0:
# sample new does not change value_level/value_idx
num = current_layout.number.get_value()
pos_0 = current_layout.position.get_value_idx()
pos_1 = current_layout.position.sample_new(num)
pos_2 = current_layout.position.sample_new(num, [pos_1])
three_value_levels = np.array([pos_0, pos_1, pos_2])
self.value_levels.append(three_value_levels[[0, 1, 2]])
if np.random.uniform() >= 0.5:
self.value_levels.append(three_value_levels[[1, 2, 0]])
self.value_levels.append(three_value_levels[[2, 0, 1]])
else:
self.value_levels.append(three_value_levels[[2, 0, 1]])
self.value_levels.append(three_value_levels[[1, 2, 0]])
second_layout.position.set_value_idx(self.value_levels[0][1])
else:
row, col = divmod(self.count, 2)
if col == 0:
current_layout.number.set_value_level(len(self.value_levels[row][0]) - 1)
current_layout.resample()
current_layout.position.set_value_idx(self.value_levels[row][0])
pos = current_layout.position.get_value()
for i in range(len(pos)):
entity = current_layout.children[i]
entity.bbox = pos[i]
second_aot = copy.deepcopy(aot)
second_layout = second_aot.children[0].children[self.component_idx].children[0]
second_layout.position.set_value_idx(self.value_levels[row][1])
else:
second_layout.position.set_value_idx(self.value_levels[row][2])
pos = second_layout.position.get_value()
for i in range(len(pos)):
entity = second_layout.children[i]
entity.bbox = pos[i]
self.count = (self.count + 1) % 6
elif self.attr == "Type":
if self.count == 0:
all_value_levels = range(current_layout.entity_constraint["Type"][0],
current_layout.entity_constraint["Type"][1] + 1)
# if np.random.uniform() >= 0.5 and 0 not in all_value_levels:
# all_value_levels.insert(0, 0)
three_value_levels = np.random.choice(all_value_levels, 3, False)
np.random.shuffle(three_value_levels)
self.value_levels.append(three_value_levels[[0, 1, 2]])
if np.random.uniform() >= 0.5:
self.value_levels.append(three_value_levels[[1, 2, 0]])
self.value_levels.append(three_value_levels[[2, 0, 1]])
else:
self.value_levels.append(three_value_levels[[2, 0, 1]])
self.value_levels.append(three_value_levels[[1, 2, 0]])
for entity in current_layout.children:
entity.type.set_value_level(self.value_levels[0][0])
for entity in second_layout.children:
entity.type.set_value_level(self.value_levels[0][1])
else:
row, col = divmod(self.count, 2)
if col == 0:
value_level = self.value_levels[row][0]
for entity in current_layout.children:
entity.type.set_value_level(value_level)
value_level = self.value_levels[row][1]
for entity in second_layout.children:
entity.type.set_value_level(value_level)
else:
value_level = self.value_levels[row][2]
for entity in second_layout.children:
entity.type.set_value_level(value_level)
self.count = (self.count + 1) % 6
elif self.attr == "Size":
if self.count == 0:
all_value_levels = range(current_layout.entity_constraint["Size"][0],
current_layout.entity_constraint["Size"][1] + 1)
three_value_levels = np.random.choice(all_value_levels, 3, False)
self.value_levels.append(three_value_levels[[0, 1, 2]])
if np.random.uniform() >= 0.5:
self.value_levels.append(three_value_levels[[1, 2, 0]])
self.value_levels.append(three_value_levels[[2, 0, 1]])
else:
self.value_levels.append(three_value_levels[[2, 0, 1]])
self.value_levels.append(three_value_levels[[1, 2, 0]])
for entity in current_layout.children:
entity.size.set_value_level(self.value_levels[0][0])
for entity in second_layout.children:
entity.size.set_value_level(self.value_levels[0][1])
else:
row, col = divmod(self.count, 2)
if col == 0:
value_level = self.value_levels[row][0]
for entity in current_layout.children:
entity.size.set_value_level(value_level)
value_level = self.value_levels[row][1]
for entity in second_layout.children:
entity.size.set_value_level(value_level)
else:
value_level = self.value_levels[row][2]
for entity in second_layout.children:
entity.size.set_value_level(value_level)
self.count = (self.count + 1) % 6
elif self.attr == "Color":
if self.count == 0:
all_value_levels = range(current_layout.entity_constraint["Color"][0],
current_layout.entity_constraint["Color"][1] + 1)
three_value_levels = np.random.choice(all_value_levels, 3, False)
self.value_levels.append(three_value_levels[[0, 1, 2]])
if np.random.uniform() >= 0.5:
self.value_levels.append(three_value_levels[[1, 2, 0]])
self.value_levels.append(three_value_levels[[2, 0, 1]])
else:
self.value_levels.append(three_value_levels[[2, 0, 1]])
self.value_levels.append(three_value_levels[[1, 2, 0]])
for entity in current_layout.children:
entity.color.set_value_level(self.value_levels[0][0])
for entity in second_layout.children:
entity.color.set_value_level(self.value_levels[0][1])
else:
row, col = divmod(self.count, 2)
if col == 0:
value_level = self.value_levels[row][0]
for entity in current_layout.children:
entity.color.set_value_level(value_level)
value_level = self.value_levels[row][1]
for entity in second_layout.children:
entity.color.set_value_level(value_level)
else:
value_level = self.value_levels[row][2]
for entity in second_layout.children:
entity.color.set_value_level(value_level)
self.count = (self.count + 1) % 6
else:
raise ValueError("Unsupported attriubute")
return second_aot
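# --- Editor's note: minimal usage sketch, not part of the original source. ---
# Rule_Wrapper dispatches on the rule name and samples a parameter on
# construction. apply_rule needs a full AoT, so only dispatch and parameter
# sampling are shown here.
if __name__ == "__main__":
    rule = Rule_Wrapper("Progression", "Size", [-2, -1, 1, 2], component_idx=0)
    print("{} on {} with step {}".format(rule.name, rule.attr, rule.value))
    const_rule = Rule_Wrapper("Constant", "Type", None, component_idx=0)
    print("Constant parameter stays {}".format(const_rule.value))  # 0 when params is None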
================================================
FILE: src/dataset/__init__.py
================================================
""" RAVEN dataset generation code
Author: Chi Zhang
Date: 05/14/2019
Contact: chi.zhang@ucla.edu
"""
================================================
FILE: src/dataset/api.py
================================================
# -*- coding: utf-8 -*-
import xml.etree.ElementTree as ET
import cv2
import numpy as np
from const import DEFAULT_WIDTH, IMAGE_SIZE
from rendering import render_entity
class Bunch:
"""Dummy class"""
def __init__(self, **kwds):
self.__dict__.update(kwds)
def get_real_bbox(entity_bbox, entity_type, entity_size, entity_angle):
assert entity_type != "none"
center = (int(entity_bbox[1] * IMAGE_SIZE), int(entity_bbox[0] * IMAGE_SIZE))
M = cv2.getRotationMatrix2D(center, entity_angle, 1)
unit = min(entity_bbox[2], entity_bbox[3]) * IMAGE_SIZE / 2
delta = DEFAULT_WIDTH * 1.5 / IMAGE_SIZE
if entity_type == "circle":
radius = unit * entity_size
real_bbox = [center[1] * 1.0 / IMAGE_SIZE, center[0] * 1.0 / IMAGE_SIZE, 2 * radius / IMAGE_SIZE + delta, 2 * radius / IMAGE_SIZE + delta]
else:
if entity_type == "triangle":
dl = int(unit * entity_size)
homo_pts = np.array([[center[0], center[1] - dl, 1],
[center[0] + int(dl / 2.0 * np.sqrt(3)), center[1] + int(dl / 2.0), 1],
[center[0] - int(dl / 2.0 * np.sqrt(3)), center[1] + int(dl / 2.0), 1]],
np.int32)
if entity_type == "square":
dl = int(unit / 2 * np.sqrt(2) * entity_size)
homo_pts = np.array([[center[0] - dl, center[1] - dl, 1],
[center[0] - dl, center[1] + dl, 1],
[center[0] + dl, center[1] + dl, 1],
[center[0] + dl, center[1] - dl, 1]],
np.int32)
if entity_type == "pentagon":
dl = int(unit * entity_size)
homo_pts = np.array([[center[0], center[1] - dl, 1],
[center[0] - int(dl * np.cos(np.pi / 10)), center[1] - int(dl * np.sin(np.pi / 10)), 1],
[center[0] - int(dl * np.sin(np.pi / 5)), center[1] + int(dl * np.cos(np.pi / 5)), 1],
[center[0] + int(dl * np.sin(np.pi / 5)), center[1] + int(dl * np.cos(np.pi / 5)), 1],
[center[0] + int(dl * np.cos(np.pi / 10)), center[1] - int(dl * np.sin(np.pi / 10)), 1]],
np.int32)
if entity_type == "hexagon":
dl = int(unit * entity_size)
homo_pts = np.array([[center[0], center[1] - dl, 1],
[center[0] - int(dl / 2.0 * np.sqrt(3)), center[1] - int(dl / 2.0), 1],
[center[0] - int(dl / 2.0 * np.sqrt(3)), center[1] + int(dl / 2.0), 1],
[center[0], center[1] + dl, 1],
[center[0] + int(dl / 2.0 * np.sqrt(3)), center[1] + int(dl / 2.0), 1],
[center[0] + int(dl / 2.0 * np.sqrt(3)), center[1] - int(dl / 2.0), 1]],
np.int32)
after_pts = np.dot(M, homo_pts.T)
min_x = min(after_pts[1, :]) / IMAGE_SIZE
max_x = max(after_pts[1, :]) / IMAGE_SIZE
min_y = min(after_pts[0, :]) / IMAGE_SIZE
max_y = max(after_pts[0, :]) / IMAGE_SIZE
real_bbox = [(min_x + max_x) / 2, (min_y + max_y) / 2, max_x - min_x + delta, max_y - min_y + delta]
return list(np.round(real_bbox, 4))
def get_mask(entity_bbox, entity_type, entity_size, entity_angle):
dummy_entity = Bunch()
dummy_entity.bbox = entity_bbox
dummy_entity.type = Bunch(get_value=lambda : entity_type)
dummy_entity.size = Bunch(get_value=lambda : entity_size)
dummy_entity.color = Bunch(get_value=lambda : 0)
dummy_entity.angle = Bunch(get_value=lambda : entity_angle)
mask = render_entity(dummy_entity) / 255
return mask
# ref: https://www.kaggle.com/stainsby/fast-tested-rle
# ref: https://www.kaggle.com/paulorzp/run-length-encode-and-decode
def rle_encode(img):
'''
img: numpy array, 1 - mask, 0 - background
Returns run length as a formatted string
'''
pixels = img.flatten()
pixels = np.concatenate([[0], pixels, [0]])
runs = np.where(pixels[1:] != pixels[:-1])[0] + 1
runs[1::2] -= runs[::2]
return "[" + ",".join(str(x) for x in runs) + "]"
def rle_decode(mask_rle, shape):
'''
mask_rle: run length as a formatted string (start length)
shape: (height,width) of array to return
Returns numpy array, 1 - mask, 0 - background
'''
s = mask_rle[1:-1].split(",")
starts, lengths = [np.asarray(x, dtype=int) for x in (s[0:][::2], s[1:][::2])]
starts -= 1
ends = starts + lengths
img = np.zeros(shape[0] * shape[1], dtype=np.uint8)
for lo, hi in zip(starts, ends):
img[lo:hi] = 1
return img.reshape(shape)
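# --- Editor's note: minimal round-trip sketch, not part of the original source. ---
# rle_encode emits 1-indexed start positions on the flattened mask, which is why
# rle_decode subtracts 1 from the starts before filling runs.
if __name__ == "__main__":
    mask = np.zeros((4, 4), dtype=np.uint8)
    mask[1:3, 1:3] = 1
    rle = rle_encode(mask)
    print("RLE string: {}".format(rle))           # "[6,2,10,2]" for this mask
    recovered = rle_decode(rle, mask.shape)
    assert (recovered == mask).all()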
================================================
FILE: src/dataset/build_tree.py
================================================
# -*- coding: utf-8 -*-
from AoT import Component, Layout, Root, Structure
from constraints import (gen_entity_constraint, gen_layout_constraint,
rule_constraint)
def build_center_single():
# Build AoT here
root = Root("Scene")
# Singleton struct
struct = Structure("Singleton")
# Singleton comp
comp = Component("Grid")
# Center_Single layout
entity_constraint = gen_entity_constraint(type_min=1)
layout_constraint = gen_layout_constraint("planar",
[(0.5, 0.5, 1, 1)],
num_min=0,
num_max=0)
layout = Layout("Center_Single", layout_constraint, entity_constraint)
comp.insert(layout)
struct.insert(comp)
root.insert(struct)
return root
def build_distribute_four():
# Build AoT here
root = Root("Scene")
# Singleton struct
struct = Structure("Singleton")
# Singleton comp
comp = Component("Grid")
# Distribute_Four
entity_constraint = gen_entity_constraint(type_min=1)
layout_constraint = gen_layout_constraint("planar",
[(0.25, 0.25, 0.5, 0.5),
(0.25, 0.75, 0.5, 0.5),
(0.75, 0.25, 0.5, 0.5),
(0.75, 0.75, 0.5, 0.5)],
num_min=0,
num_max=3)
layout = Layout("Distribute_Four", layout_constraint, entity_constraint)
comp.insert(layout)
struct.insert(comp)
root.insert(struct)
return root
def build_distribute_nine():
# Build AoT here
root = Root("Scene")
# Singleton struct
struct = Structure("Singleton")
# Singleton comp
comp = Component("Grid")
# Distribute_Nine
entity_constraint = gen_entity_constraint(type_min=1)
layout_constraint = gen_layout_constraint("planar",
[(0.16, 0.16, 0.33, 0.33),
(0.16, 0.5, 0.33, 0.33),
(0.16, 0.83, 0.33, 0.33),
(0.5, 0.16, 0.33, 0.33),
(0.5, 0.5, 0.33, 0.33),
(0.5, 0.83, 0.33, 0.33),
(0.83, 0.16, 0.33, 0.33),
(0.83, 0.5, 0.33, 0.33),
(0.83, 0.83, 0.33, 0.33)],
num_min=0,
num_max=8)
layout = Layout("Distribute_Nine", layout_constraint, entity_constraint)
comp.insert(layout)
struct.insert(comp)
root.insert(struct)
return root
def build_left_center_single_right_center_single():
# Build AoT here
root = Root("Scene")
# Left-Right Structure
struct = Structure("Left_Right")
# Left Component
comp_left = Component("Left")
# Left_Center_Single
entity_constraint = gen_entity_constraint(type_min=1)
layout_constraint = gen_layout_constraint("planar",
[(0.5, 0.25, 0.5, 0.5)],
num_min=0,
num_max=0)
layout = Layout("Left_Center_Single", layout_constraint, entity_constraint)
comp_left.insert(layout)
# Right Component
comp_right = Component("Right")
# Right_Center_Single
entity_constraint = gen_entity_constraint(type_min=1)
layout_constraint = gen_layout_constraint("planar",
[(0.5, 0.75, 0.5, 0.5)],
num_min=0,
num_max=0)
layout = Layout("Right_Center_Single", layout_constraint, entity_constraint)
comp_right.insert(layout)
struct.insert(comp_left)
struct.insert(comp_right)
root.insert(struct)
return root
def build_up_center_single_down_center_single():
# Build AoT here
root = Root("Scene")
# Up-Down Structure
struct = Structure("Up_Down")
# Left Component
comp_up = Component("Up")
# Up_Center_Single
entity_constraint = gen_entity_constraint(type_min=1)
layout_constraint = gen_layout_constraint("planar",
[(0.25, 0.5, 0.5, 0.5)],
num_min=0,
num_max=0)
layout = Layout("Up_Center_Single", layout_constraint, entity_constraint)
comp_up.insert(layout)
# Down Component
comp_down = Component("Down")
# Down_Center_Single
entity_constraint = gen_entity_constraint(type_min=1)
layout_constraint = gen_layout_constraint("planar",
[(0.75, 0.5, 0.5, 0.5)],
num_min=0,
num_max=0)
layout = Layout("Down_Center_Single", layout_constraint, entity_constraint)
comp_down.insert(layout)
struct.insert(comp_up)
struct.insert(comp_down)
root.insert(struct)
return root
def build_in_center_single_out_center_single():
# Build AoT here
root = Root("Scene")
# In-Out Structure
struct = Structure("Out_In")
# Out Component
comp_out = Component("Out")
# Out_One
entity_constraint = gen_entity_constraint(type_min=1,
size_min=3,
color_max=0)
layout_constraint = gen_layout_constraint("planar",
[(0.5, 0.5, 1, 1)],
num_min=0,
num_max=0)
layout = Layout("Out_Center_Single", layout_constraint, entity_constraint)
comp_out.insert(layout)
# In Component
comp_in = Component("In")
# In_Center_Single
entity_constraint = gen_entity_constraint(type_min=1)
layout_constraint = gen_layout_constraint("planar",
[(0.5, 0.5, 0.33, 0.33)],
num_min=0,
num_max=0)
layout = Layout("In_Center_Single", layout_constraint, entity_constraint)
comp_in.insert(layout)
struct.insert(comp_out)
struct.insert(comp_in)
root.insert(struct)
return root
def build_in_distribute_four_out_center_single():
# Build AoT here
root = Root("Scene")
# In-Out Structure
struct = Structure("Out_In")
# Out Component
comp_out = Component("Out")
# Out_One
entity_constraint = gen_entity_constraint(type_min=1,
size_min=3,
color_max=0)
layout_constraint = gen_layout_constraint("planar",
[(0.5, 0.5, 1, 1)],
num_min=0,
num_max=0)
layout = Layout("Out_Center_Single", layout_constraint, entity_constraint)
comp_out.insert(layout)
# In Component
comp_in = Component("In")
# In_Four
entity_constraint = gen_entity_constraint(type_min=1, size_min=2)
layout_constraint = gen_layout_constraint("planar",
[(0.42, 0.42, 0.15, 0.15),
(0.42, 0.58, 0.15, 0.15),
(0.58, 0.42, 0.15, 0.15),
(0.58, 0.58, 0.15, 0.15)],
num_min=0,
num_max=3)
layout = Layout("In_Distribute_Four", layout_constraint, entity_constraint)
comp_in.insert(layout)
struct.insert(comp_out)
struct.insert(comp_in)
root.insert(struct)
return root
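# --- Editor's note: minimal usage sketch, not part of the original source. ---
# Every builder returns the same Root -> Structure -> Component -> Layout
# skeleton; in the generation pipeline the tree is first pruned against a rule
# group and then sampled into a concrete panel (see src/dataset/main.py).
if __name__ == "__main__":
    root = build_distribute_four()
    struct = root.children[0]
    for comp in struct.children:
        layout = comp.children[0]
        print("{} / {} / {} / {}".format(root.name, struct.name, comp.name, layout.name))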
================================================
FILE: src/dataset/const.py
================================================
# -*- coding: utf-8 -*-
# Maximum number of components in a RPM
MAX_COMPONENTS = 2
# Canvas parameters
IMAGE_SIZE = 160
CENTER = (IMAGE_SIZE / 2, IMAGE_SIZE / 2)
DEFAULT_RADIUS = IMAGE_SIZE / 4
DEFAULT_WIDTH = 2
# Attribute parameters
# Number
NUM_VALUES = [1, 2, 3, 4, 5, 6, 7, 8, 9]
NUM_MIN = 0
NUM_MAX = len(NUM_VALUES) - 1
# Uniformity
UNI_VALUES = [False, False, False, True]
UNI_MIN = 0
UNI_MAX = len(UNI_VALUES) - 1
# Type
TYPE_VALUES = ["none", "triangle", "square", "pentagon", "hexagon", "circle"]
TYPE_MIN = 0
TYPE_MAX = len(TYPE_VALUES) - 1
# Size
SIZE_VALUES = [0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
SIZE_MIN = 0
SIZE_MAX = len(SIZE_VALUES) - 1
# Color
COLOR_VALUES = [255, 224, 196, 168, 140, 112, 84, 56, 28, 0]
COLOR_MIN = 0
COLOR_MAX = len(COLOR_VALUES) - 1
# Angle: self-rotation
ANGLE_VALUES = [-135, -90, -45, 0, 45, 90, 135, 180]
ANGLE_MIN = 0
ANGLE_MAX = len(ANGLE_VALUES) - 1
META_TARGET_FORMAT = ["Constant", "Progression", "Arithmetic", "Distribute_Three", "Number", "Position", "Type", "Size", "Color"]
META_STRUCTURE_FORMAT = ["Singleton", "Left_Right", "Up_Down", "Out_In", "Left", "Right", "Up", "Down", "Out", "In", "Grid", "Center_Single", "Distribute_Four", "Distribute_Nine", "Left_Center_Single", "Right_Center_Single", "Up_Center_Single", "Down_Center_Single", "Out_Center_Single", "In_Center_Single", "In_Distribute_Four"]
# Rule, Attr, Param
# The design encodes rule priority order: Number/Position always comes first
# Number and Position cannot both be sampled
# Progression on Number: Number on each Panel +1/+2 or -1/-2
# Progression on Position: Entities on each Panel roll over the layout
# Arithmetic on Number: Number on the third Panel = Number on the first +/- Number on the second (1 for + and -1 for -)
# Arithmetic on Position: 1 for SET_UNION and -1 for SET_DIFF
# Distribute_Three on Number: Three numbers through each row
# Distribute_Three on Position: Three positions (same number) through each row
# Constant on Number/Position: Nothing changes
# Progression on Type: Type progression defined as the number of edges on each entity (Triangle, Square, Pentagon, Hexagon, Circle)
# Distribute_Three on Type: Three types through each row
# Constant on Type: Nothing changes
# Progression on Size: Size on each entity +1/+2 or -1/-2
# Arithmetic on Size: Size on the third Panel = Size on the first +/- Size on the second (1 for + and -1 for -)
# Distribute_Three on Size: Three sizes through each row
# Constant on Size: Nothing changes
# Progression on Color: Color +1/+2 or -1/-2
# Arithmetic on Color: Color on the third Panel = Color on the first +/- Color on the second (1 for + and -1 for -)
# Distribute_Three on Color: Three colors through each row
# Constant on Color: Nothing changes
# Note that all rules on Type, Size and Color enforce value consistency in a panel
RULE_ATTR = [[["Progression", "Number", [-2, -1, 1, 2]],
["Progression", "Position", [-2, -1, 1, 2]],
["Arithmetic", "Number", [1, -1]],
["Arithmetic", "Position", [1, -1]],
["Distribute_Three", "Number", None],
["Distribute_Three", "Position", None],
["Constant", "Number/Position", None]],
[["Progression", "Type", [-2, -1, 1, 2]],
["Distribute_Three", "Type", None],
["Constant", "Type", None]],
[["Progression", "Size", [-2, -1, 1, 2]],
["Arithmetic", "Size", [1, -1]],
["Distribute_Three", "Size", None],
["Constant", "Size", None]],
[["Progression", "Color", [-2, -1, 1, 2]],
["Arithmetic", "Color", [1, -1]],
["Distribute_Three", "Color", None],
["Constant", "Color", None]]]
================================================
FILE: src/dataset/constraints.py
================================================
# -*- coding: utf-8 -*-
from const import (ANGLE_MAX, ANGLE_MIN, COLOR_MAX, COLOR_MIN, NUM_MAX,
NUM_MIN, SIZE_MAX, SIZE_MIN, TYPE_MAX, TYPE_MIN, UNI_MAX,
UNI_MIN)
def gen_layout_constraint(pos_type, pos_list,
num_min=NUM_MIN, num_max=NUM_MAX,
uni_min=UNI_MIN, uni_max=UNI_MAX):
constraint = {"Number": [num_min, num_max],
"Position": [pos_type, pos_list[:]],
"Uni": [uni_min, uni_max]}
return constraint
def gen_entity_constraint(type_min=TYPE_MIN, type_max=TYPE_MAX,
size_min=SIZE_MIN, size_max=SIZE_MAX,
color_min=COLOR_MIN, color_max=COLOR_MAX,
angle_min=ANGLE_MIN, angle_max=ANGLE_MAX):
constraint = {"Type": [type_min, type_max],
"Size": [size_min, size_max],
"Color": [color_min, color_max],
"Angle": [angle_min, angle_max]}
return constraint
def rule_constraint(rule_list, num_min, num_max,
uni_min, uni_max,
type_min, type_max,
size_min, size_max,
color_min, color_max):
"""Generate constraints given the rules and the original constraints
from layout and entity. Note that each attribute has at most one rule
applied to it.
Arguments:
rule_list(ordered list of Rule): all rules applied to this layout
others (int): boundary levels for each attribute in a layout; note that
num_max + 1 == len(layout.position.values)
Returns:
layout_constraint(dict): a new layout constraint
entity_constraint(dict): a new entity constraint
"""
assert len(rule_list) > 0
for rule in rule_list:
if rule.name == "Progression":
# rule.value: add/sub how many levels
if rule.attr == "Number":
if rule.value > 0:
num_max = num_max - rule.value * 2
else:
num_min = num_min - rule.value * 2
if rule.attr == "Position":
# Progression here means moving in Layout slots in order
abs_value = abs(rule.value)
num_max = num_max - abs_value * 2
if rule.attr == "Type":
if rule.value > 0:
type_max = type_max - rule.value * 2
else:
type_min = type_min - rule.value * 2
if rule.attr == "Size":
if rule.value > 0:
size_max = size_max - rule.value * 2
else:
size_min = size_min - rule.value * 2
if rule.attr == "Color":
if rule.value > 0:
color_max = color_max - rule.value * 2
else:
color_min = color_min - rule.value * 2
if rule.name == "Arithmetic":
# rule.value > 0 if add col_0 + col_1
# rule.value < 0 if sub col_0 - col_1
if rule.attr == "Number":
if rule.value > 0:
num_max = num_max - num_min - 1
else:
num_min = 2 * num_min + 1
if rule.attr == "Position":
# SET_UNION: at least two position configurations
if rule.value > 0:
num_max = num_max - 1
# SET_DIFF: num_min makes sure of overlap;
# at least two configurations
else:
num_min = (num_max + 2) / 2 - 1
num_max = num_max - 1
if rule.attr == "Size":
if rule.value > 0:
size_max = size_max - size_min - 1
else:
size_min = 2 * size_min + 1
if rule.attr == "Color":
# at least two different colors
if color_max - color_min < 1:
color_max = color_min - 1
else:
if rule.value > 0:
color_max = color_max - color_min
if rule.value < 0:
color_min = 2 * color_min
if rule.name == "Distribute_Three":
# if less than 3 values, invalidate it
if rule.attr == "Number":
if num_max - num_min + 1 < 3:
num_max = num_min - 1
if rule.attr == "Position":
# max number allowed in the layout should be >= 3
if num_max + 1 < 3:
num_max = num_min - 1
# num_max + 1 == len(layout.position.values)
# we need at least three distinct configurations: C(num_max + 1, num_value) >= 3
# since C(n, 1) = n and num_max + 1 >= 3 is checked above, it suffices
# to constrain num_max: num_max = num_max - 1
# Check Yang Hui’s Triangle (Pascal's Triangle): https://www.varsitytutors.com/hotmath/hotmath_help/topics/yang-huis-triangle
else:
num_max = num_max - 1
if rule.attr == "Type":
if type_max - type_min + 1 < 3:
type_max = type_min - 1
if rule.attr == "Size":
if size_max - size_min + 1 < 3:
size_max = size_min - 1
if rule.attr == "Color":
if color_max - color_min + 1 < 3:
color_max = color_min - 1
return gen_layout_constraint(None, [],
num_min, num_max,
uni_min, uni_max), \
gen_entity_constraint(type_min, type_max,
size_min, size_max,
color_min, color_max)
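# --- Editor's note: minimal usage sketch, not part of the original source. ---
# rule_constraint only reads .name, .attr and .value, so a lightweight stand-in
# object (the hypothetical _FakeRule below) is enough to show how a Progression
# on Number narrows the admissible bounds.
if __name__ == "__main__":
    class _FakeRule(object):
        def __init__(self, name, attr, value):
            self.name, self.attr, self.value = name, attr, value
    plus_one = _FakeRule("Progression", "Number", 1)  # +1 level per column
    layout_c, entity_c = rule_constraint([plus_one],
                                         num_min=0, num_max=8,  # Distribute_Nine bounds
                                         uni_min=UNI_MIN, uni_max=UNI_MAX,
                                         type_min=TYPE_MIN, type_max=TYPE_MAX,
                                         size_min=SIZE_MIN, size_max=SIZE_MAX,
                                         color_min=COLOR_MIN, color_max=COLOR_MAX)
    print("Number bounds: {}".format(layout_c["Number"]))  # [0, 6]: room for two +1 steps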
================================================
FILE: src/dataset/main.py
================================================
# -*- coding: utf-8 -*-
import argparse
import copy
import os
import random
import sys
import numpy as np
from tqdm import trange
from build_tree import (build_center_single, build_distribute_four,
build_distribute_nine,
build_in_center_single_out_center_single,
build_in_distribute_four_out_center_single,
build_left_center_single_right_center_single,
build_up_center_single_down_center_single)
from const import IMAGE_SIZE, RULE_ATTR
from rendering import (generate_matrix, generate_matrix_answer, imsave, imshow,
render_panel)
from Rule import Rule_Wrapper
from sampling import sample_attr, sample_attr_avail, sample_rules
from serialize import dom_problem, serialize_aot, serialize_rules
from solver import solve
def merge_component(dst_aot, src_aot, component_idx):
src_component = src_aot.children[0].children[component_idx]
dst_aot.children[0].children[component_idx] = src_component
def fuse(args, all_configs):
random.seed(args.seed)
np.random.seed(args.seed)
acc = 0
for k in trange(args.num_samples * len(all_configs)):
if k < args.num_samples * (1 - args.val - args.test):
set_name = "train"
elif k < args.num_samples * (1 - args.test):
set_name = "val"
else:
set_name = "test"
tree_name = random.choice(all_configs.keys())
root = all_configs[tree_name]
while True:
rule_groups = sample_rules()
new_root = root.prune(rule_groups)
if new_root is not None:
break
start_node = new_root.sample()
row_1_1 = copy.deepcopy(start_node)
for l in range(len(rule_groups)):
rule_group = rule_groups[l]
rule_num_pos = rule_group[0]
row_1_2 = rule_num_pos.apply_rule(row_1_1)
row_1_3 = rule_num_pos.apply_rule(row_1_2)
for i in range(1, len(rule_group)):
rule = rule_group[i]
row_1_2 = rule.apply_rule(row_1_1, row_1_2)
for i in range(1, len(rule_group)):
rule = rule_group[i]
row_1_3 = rule.apply_rule(row_1_2, row_1_3)
if l == 0:
to_merge = [row_1_1, row_1_2, row_1_3]
else:
merge_component(to_merge[1], row_1_2, l)
merge_component(to_merge[2], row_1_3, l)
row_1_1, row_1_2, row_1_3 = to_merge
row_2_1 = copy.deepcopy(start_node)
row_2_1.resample(True)
for l in range(len(rule_groups)):
rule_group = rule_groups[l]
rule_num_pos = rule_group[0]
row_2_2 = rule_num_pos.apply_rule(row_2_1)
row_2_3 = rule_num_pos.apply_rule(row_2_2)
for i in range(1, len(rule_group)):
rule = rule_group[i]
row_2_2 = rule.apply_rule(row_2_1, row_2_2)
for i in range(1, len(rule_group)):
rule = rule_group[i]
row_2_3 = rule.apply_rule(row_2_2, row_2_3)
if l == 0:
to_merge = [row_2_1, row_2_2, row_2_3]
else:
merge_component(to_merge[1], row_2_2, l)
merge_component(to_merge[2], row_2_3, l)
row_2_1, row_2_2, row_2_3 = to_merge
row_3_1 = copy.deepcopy(start_node)
row_3_1.resample(True)
for l in range(len(rule_groups)):
rule_group = rule_groups[l]
rule_num_pos = rule_group[0]
row_3_2 = rule_num_pos.apply_rule(row_3_1)
row_3_3 = rule_num_pos.apply_rule(row_3_2)
for i in range(1, len(rule_group)):
rule = rule_group[i]
row_3_2 = rule.apply_rule(row_3_1, row_3_2)
for i in range(1, len(rule_group)):
rule = rule_group[i]
row_3_3 = rule.apply_rule(row_3_2, row_3_3)
if l == 0:
to_merge = [row_3_1, row_3_2, row_3_3]
else:
merge_component(to_merge[1], row_3_2, l)
merge_component(to_merge[2], row_3_3, l)
row_3_1, row_3_2, row_3_3 = to_merge
imgs = [render_panel(row_1_1),
render_panel(row_1_2),
render_panel(row_1_3),
render_panel(row_2_1),
render_panel(row_2_2),
render_panel(row_2_3),
render_panel(row_3_1),
render_panel(row_3_2),
np.zeros((IMAGE_SIZE, IMAGE_SIZE), np.uint8)]
context = [row_1_1, row_1_2, row_1_3, row_2_1, row_2_2, row_2_3, row_3_1, row_3_2]
modifiable_attr = sample_attr_avail(rule_groups, row_3_3)
answer_AoT = copy.deepcopy(row_3_3)
candidates = [answer_AoT]
for j in range(7):
component_idx, attr_name, min_level, max_level = sample_attr(modifiable_attr)
answer_j = copy.deepcopy(answer_AoT)
answer_j.sample_new(component_idx, attr_name, min_level, max_level, answer_AoT)
candidates.append(answer_j)
random.shuffle(candidates)
answers = []
for candidate in candidates:
answers.append(render_panel(candidate))
# imsave(generate_matrix_answer(imgs + answers), "./experiments/fuse/{}.jpg".format(k))
image = imgs[0:8] + answers
target = candidates.index(answer_AoT)
predicted = solve(rule_groups, context, candidates)
meta_matrix, meta_target = serialize_rules(rule_groups)
structure, meta_structure = serialize_aot(start_node)
np.savez("{}/RAVEN_{}_{}.npz".format(args.save_dir, k, set_name), image=image,
target=target,
predict=predicted,
meta_matrix=meta_matrix,
meta_target=meta_target,
structure=structure,
meta_structure=meta_structure)
with open("{}/RAVEN_{}_{}.xml".format(args.save_dir, k, set_name), "w") as f:
dom = dom_problem(context + candidates, rule_groups)
f.write(dom)
if target == predicted:
acc += 1
print "Accuracy: {}".format(float(acc) / (args.num_samples * len(all_configs)))
def separate(args, all_configs):
random.seed(args.seed)
np.random.seed(args.seed)
for key in all_configs.keys():
acc = 0
for k in trange(args.num_samples):
count_num = k % 10
if count_num < (10 - args.val - args.test):
set_name = "train"
elif count_num < (10 - args.test):
set_name = "val"
else:
set_name = "test"
root = all_configs[key]
while True:
rule_groups = sample_rules()
new_root = root.prune(rule_groups)
if new_root is not None:
break
start_node = new_root.sample()
row_1_1 = copy.deepcopy(start_node)
for l in range(len(rule_groups)):
rule_group = rule_groups[l]
rule_num_pos = rule_group[0]
row_1_2 = rule_num_pos.apply_rule(row_1_1)
row_1_3 = rule_num_pos.apply_rule(row_1_2)
for i in range(1, len(rule_group)):
rule = rule_group[i]
row_1_2 = rule.apply_rule(row_1_1, row_1_2)
for i in range(1, len(rule_group)):
rule = rule_group[i]
row_1_3 = rule.apply_rule(row_1_2, row_1_3)
if l == 0:
to_merge = [row_1_1, row_1_2, row_1_3]
else:
merge_component(to_merge[1], row_1_2, l)
merge_component(to_merge[2], row_1_3, l)
row_1_1, row_1_2, row_1_3 = to_merge
row_2_1 = copy.deepcopy(start_node)
row_2_1.resample(True)
for l in range(len(rule_groups)):
rule_group = rule_groups[l]
rule_num_pos = rule_group[0]
row_2_2 = rule_num_pos.apply_rule(row_2_1)
row_2_3 = rule_num_pos.apply_rule(row_2_2)
for i in range(1, len(rule_group)):
rule = rule_group[i]
row_2_2 = rule.apply_rule(row_2_1, row_2_2)
for i in range(1, len(rule_group)):
rule = rule_group[i]
row_2_3 = rule.apply_rule(row_2_2, row_2_3)
if l == 0:
to_merge = [row_2_1, row_2_2, row_2_3]
else:
merge_component(to_merge[1], row_2_2, l)
merge_component(to_merge[2], row_2_3, l)
row_2_1, row_2_2, row_2_3 = to_merge
row_3_1 = copy.deepcopy(start_node)
row_3_1.resample(True)
for l in range(len(rule_groups)):
rule_group = rule_groups[l]
rule_num_pos = rule_group[0]
row_3_2 = rule_num_pos.apply_rule(row_3_1)
row_3_3 = rule_num_pos.apply_rule(row_3_2)
for i in range(1, len(rule_group)):
rule = rule_group[i]
row_3_2 = rule.apply_rule(row_3_1, row_3_2)
for i in range(1, len(rule_group)):
rule = rule_group[i]
row_3_3 = rule.apply_rule(row_3_2, row_3_3)
if l == 0:
to_merge = [row_3_1, row_3_2, row_3_3]
else:
merge_component(to_merge[1], row_3_2, l)
merge_component(to_merge[2], row_3_3, l)
row_3_1, row_3_2, row_3_3 = to_merge
imgs = [render_panel(row_1_1),
render_panel(row_1_2),
render_panel(row_1_3),
render_panel(row_2_1),
render_panel(row_2_2),
render_panel(row_2_3),
render_panel(row_3_1),
render_panel(row_3_2),
np.zeros((IMAGE_SIZE, IMAGE_SIZE), np.uint8)]
context = [row_1_1, row_1_2, row_1_3, row_2_1, row_2_2, row_2_3, row_3_1, row_3_2]
modifiable_attr = sample_attr_avail(rule_groups, row_3_3)
answer_AoT = copy.deepcopy(row_3_3)
candidates = [answer_AoT]
for j in range(7):
component_idx, attr_name, min_level, max_level = sample_attr(modifiable_attr)
answer_j = copy.deepcopy(answer_AoT)
answer_j.sample_new(component_idx, attr_name, min_level, max_level, answer_AoT)
candidates.append(answer_j)
random.shuffle(candidates)
answers = []
for candidate in candidates:
answers.append(render_panel(candidate))
# imsave(generate_matrix_answer(imgs + answers), "./experiments/{}/{}.jpg".format(key, k))
image = imgs[0:8] + answers
target = candidates.index(answer_AoT)
predicted = solve(rule_groups, context, candidates)
meta_matrix, meta_target = serialize_rules(rule_groups)
structure, meta_structure = serialize_aot(start_node)
np.savez("{}/{}/RAVEN_{}_{}.npz".format(args.save_dir, key, k, set_name), image=image,
target=target,
predict=predicted,
meta_matrix=meta_matrix,
meta_target=meta_target,
structure=structure,
meta_structure=meta_structure)
with open("{}/{}/RAVEN_{}_{}.xml".format(args.save_dir, key, k, set_name), "w") as f:
dom = dom_problem(context + candidates, rule_groups)
f.write(dom)
if target == predicted:
acc += 1
print "Accuracy of {}: {}".format(key, float(acc) / args.num_samples)
def main():
main_arg_parser = argparse.ArgumentParser(description="parser for RAVEN")
main_arg_parser.add_argument("--num-samples", type=int, default=20000,
help="number of samples for each component configuration")
main_arg_parser.add_argument("--save-dir", type=str, default="~/Datasets/",
help="path to folder where the generated dataset will be saved.")
main_arg_parser.add_argument("--seed", type=int, default=1234,
help="random seed for dataset generation")
main_arg_parser.add_argument("--fuse", type=int, default=0,
help="whether to fuse different configurations")
main_arg_parser.add_argument("--val", type=float, default=2,
help="the proportion of the size of validation set")
main_arg_parser.add_argument("--test", type=float, default=2,
help="the proportion of the size of test set")
args = main_arg_parser.parse_args()
all_configs = {"center_single": build_center_single(),
"distribute_four": build_distribute_four(),
"distribute_nine": build_distribute_nine(),
"left_center_single_right_center_single": build_left_center_single_right_center_single(),
"up_center_single_down_center_single": build_up_center_single_down_center_single(),
"in_center_single_out_center_single": build_in_center_single_out_center_single(),
"in_distribute_four_out_center_single": build_in_distribute_four_out_center_single()}
    args.save_dir = os.path.expanduser(args.save_dir)  # expand "~" so the default path works
    if not os.path.exists(args.save_dir):
        os.mkdir(args.save_dir)
if args.fuse:
if not os.path.exists(os.path.join(args.save_dir, "fuse")):
os.mkdir(os.path.join(args.save_dir, "fuse"))
fuse(args, all_configs)
else:
for key in all_configs.keys():
if not os.path.exists(os.path.join(args.save_dir, key)):
os.mkdir(os.path.join(args.save_dir, key))
separate(args, all_configs)
if __name__ == "__main__":
main()
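
A minimal generation run (editor's sketch; the flags are exactly those defined above, the output path is illustrative):

    python src/dataset/main.py --num-samples 20 --save-dir ./RAVEN-small --seed 1234

Each of the seven configurations then gets its own subfolder of paired .npz/.xml files; passing --fuse 1 instead samples fused configurations into a single "fuse" subfolder.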
================================================
FILE: src/dataset/rendering.py
================================================
# -*- coding: utf-8 -*-
import cv2
import numpy as np
from PIL import Image
from AoT import Root
from const import CENTER, DEFAULT_WIDTH, IMAGE_SIZE
def imshow(array):
image = Image.fromarray(array)
image.show()
def imsave(array, filepath):
image = Image.fromarray(array)
image.save(filepath)
def generate_matrix(array_list):
# row-major array_list
assert len(array_list) <= 9
img_grid = np.zeros((IMAGE_SIZE * 3, IMAGE_SIZE * 3), np.uint8)
for idx in range(len(array_list)):
i, j = divmod(idx, 3)
img_grid[i * IMAGE_SIZE:(i + 1) * IMAGE_SIZE, j * IMAGE_SIZE:(j + 1) * IMAGE_SIZE] = array_list[idx]
# draw grid
for x in [0.33, 0.67]:
img_grid[int(x * IMAGE_SIZE * 3) - 1:int(x * IMAGE_SIZE * 3) + 1, :] = 0
for y in [0.33, 0.67]:
img_grid[:, int(y * IMAGE_SIZE * 3) - 1:int(y * IMAGE_SIZE * 3) + 1] = 0
return img_grid
def generate_answers(array_list):
assert len(array_list) <= 8
img_grid = np.zeros((IMAGE_SIZE * 2, IMAGE_SIZE * 4), np.uint8)
for idx in range(len(array_list)):
i, j = divmod(idx, 4)
img_grid[i * IMAGE_SIZE:(i + 1) * IMAGE_SIZE, j * IMAGE_SIZE:(j + 1) * IMAGE_SIZE] = array_list[idx]
# draw grid
for x in [0.5]:
img_grid[int(x * IMAGE_SIZE * 2) - 1:int(x * IMAGE_SIZE * 2) + 1, :] = 0
for y in [0.25, 0.5, 0.75]:
img_grid[:, int(y * IMAGE_SIZE * 4) - 1:int(y * IMAGE_SIZE * 4) + 1] = 0
return img_grid
def generate_matrix_answer(array_list):
# row-major array_list
assert len(array_list) <= 18
img_grid = np.zeros((IMAGE_SIZE * 6, IMAGE_SIZE * 3), np.uint8)
for idx in range(len(array_list)):
i, j = divmod(idx, 3)
img_grid[i * IMAGE_SIZE:(i + 1) * IMAGE_SIZE, j * IMAGE_SIZE:(j + 1) * IMAGE_SIZE] = array_list[idx]
# draw grid
for x in [0.33, 0.67, 1.00, 1.33, 1.67]:
img_grid[int(x * IMAGE_SIZE * 3), :] = 0
for y in [0.33, 0.67]:
img_grid[:, int(y * IMAGE_SIZE * 3)] = 0
return img_grid
def merge_matrix_answer(matrix, answer):
matrix_image = generate_matrix(matrix)
answer_image = generate_answers(answer)
img_grid = np.ones((IMAGE_SIZE * 5 + 20, IMAGE_SIZE * 4), np.uint8) * 255
img_grid[:IMAGE_SIZE * 3, int(0.5 * IMAGE_SIZE):int(3.5 * IMAGE_SIZE)] = matrix_image
img_grid[-(IMAGE_SIZE * 2):, :] = answer_image
return img_grid
def render_panel(root):
# Decompose the panel into a structure and its entities
assert isinstance(root, Root)
canvas = np.ones((IMAGE_SIZE, IMAGE_SIZE), np.uint8) * 255
structure, entities = root.prepare()
structure_img = render_structure(structure)
background = np.zeros((IMAGE_SIZE, IMAGE_SIZE), np.uint8)
    # note that entities of earlier (left) components are drawn on the lower layer
for entity in entities:
entity_img = render_entity(entity)
background = layer_add(background, entity_img)
background = layer_add(background, structure_img)
return canvas - background
def render_structure(structure_name):
ret = None
if structure_name == "Left_Right":
ret = np.zeros((IMAGE_SIZE, IMAGE_SIZE), np.uint8)
ret[:, int(0.5 * IMAGE_SIZE)] = 255.0
elif structure_name == "Up_Down":
ret = np.zeros((IMAGE_SIZE, IMAGE_SIZE), np.uint8)
ret[int(0.5 * IMAGE_SIZE), :] = 255.0
else:
ret = np.zeros((IMAGE_SIZE, IMAGE_SIZE), np.uint8)
return ret
def render_entity(entity):
entity_bbox = entity.bbox
entity_type = entity.type.get_value()
entity_size = entity.size.get_value()
entity_color = entity.color.get_value()
entity_angle = entity.angle.get_value()
img = np.zeros((IMAGE_SIZE, IMAGE_SIZE), np.uint8)
# planar position: [x, y, w, h]
# angular position: [x, y, w, h, x_c, y_c, omega]
# center: (columns, rows)
center = (int(entity_bbox[1] * IMAGE_SIZE), int(entity_bbox[0] * IMAGE_SIZE))
if entity_type == "triangle":
unit = min(entity_bbox[2], entity_bbox[3]) * IMAGE_SIZE / 2
dl = int(unit * entity_size)
pts = np.array([[center[0], center[1] - dl],
[center[0] + int(dl / 2.0 * np.sqrt(3)), center[1] + int(dl / 2.0)],
[center[0] - int(dl / 2.0 * np.sqrt(3)), center[1] + int(dl / 2.0)]],
np.int32)
pts = pts.reshape((-1, 1, 2))
color = 255 - entity_color
width = DEFAULT_WIDTH
draw_triangle(img, pts, color, width)
elif entity_type == "square":
unit = min(entity_bbox[2], entity_bbox[3]) * IMAGE_SIZE / 2
dl = int(unit / 2 * np.sqrt(2) * entity_size)
pt1 = (center[0] - dl, center[1] - dl)
pt2 = (center[0] + dl, center[1] + dl)
color = 255 - entity_color
width = DEFAULT_WIDTH
draw_square(img, pt1, pt2, color, width)
elif entity_type == "pentagon":
unit = min(entity_bbox[2], entity_bbox[3]) * IMAGE_SIZE / 2
dl = int(unit * entity_size)
pts = np.array([[center[0], center[1] - dl],
[center[0] - int(dl * np.cos(np.pi / 10)), center[1] - int(dl * np.sin(np.pi / 10))],
[center[0] - int(dl * np.sin(np.pi / 5)), center[1] + int(dl * np.cos(np.pi / 5))],
[center[0] + int(dl * np.sin(np.pi / 5)), center[1] + int(dl * np.cos(np.pi / 5))],
[center[0] + int(dl * np.cos(np.pi / 10)), center[1] - int(dl * np.sin(np.pi / 10))]],
np.int32)
pts = pts.reshape((-1, 1, 2))
color = 255 - entity_color
width = DEFAULT_WIDTH
draw_pentagon(img, pts, color, width)
elif entity_type == "hexagon":
unit = min(entity_bbox[2], entity_bbox[3]) * IMAGE_SIZE / 2
dl = int(unit * entity_size)
pts = np.array([[center[0], center[1] - dl],
[center[0] - int(dl / 2.0 * np.sqrt(3)), center[1] - int(dl / 2.0)],
[center[0] - int(dl / 2.0 * np.sqrt(3)), center[1] + int(dl / 2.0)],
[center[0], center[1] + dl],
[center[0] + int(dl / 2.0 * np.sqrt(3)), center[1] + int(dl / 2.0)],
[center[0] + int(dl / 2.0 * np.sqrt(3)), center[1] - int(dl / 2.0)]],
np.int32)
pts = pts.reshape((-1, 1, 2))
color = 255 - entity_color
width = DEFAULT_WIDTH
draw_hexagon(img, pts, color, width)
elif entity_type == "circle":
        # subtract from 255 because render_panel inverts the canvas on return
color = 255 - entity_color
unit = min(entity_bbox[2], entity_bbox[3]) * IMAGE_SIZE / 2
radius = int(unit * entity_size)
width = DEFAULT_WIDTH
draw_circle(img, center, radius, color, width)
elif entity_type == "none":
pass
# angular
if len(entity_bbox) > 4:
# [x, y, w, h, x_c, y_c, omega]
entity_angle = entity_bbox[6]
center = (int(entity_bbox[5] * IMAGE_SIZE), int(entity_bbox[4] * IMAGE_SIZE))
img = rotate(img, entity_angle, center=center)
# planar
else:
img = rotate(img, entity_angle, center=center)
# img = shift(img, *entity_position)
return img
def shift(img, dx, dy):
M = np.array([[1, 0, dx], [0, 1, dy]], np.float32)
img = cv2.warpAffine(img, M, (IMAGE_SIZE, IMAGE_SIZE), flags=cv2.INTER_LINEAR)
return img
def rotate(img, angle, center=CENTER):
M = cv2.getRotationMatrix2D(center, angle, 1)
img = cv2.warpAffine(img, M, (IMAGE_SIZE, IMAGE_SIZE), flags=cv2.INTER_LINEAR)
return img
def scale(img, tx, ty, center=CENTER):
M = np.array([[tx, 0, center[0] * (1 - tx)], [0, ty, center[1] * (1 - ty)]], np.float32)
img = cv2.warpAffine(img, M, (IMAGE_SIZE, IMAGE_SIZE), flags=cv2.INTER_LINEAR)
return img
def layer_add(lower_layer_np, higher_layer_np):
# higher_layer_np is superimposed on lower_layer_np
# new_np = lower_layer_np.copy()
# lower_layer_np is modified
lower_layer_np[higher_layer_np > 0] = 0
return lower_layer_np + higher_layer_np
# Draw primitives
def draw_triangle(img, pts, color, width):
# if filled
if color != 0:
# fill the interior
cv2.fillConvexPoly(img, pts, color)
# draw the edge
cv2.polylines(img, [pts], True, 255, width)
# if not filled
else:
cv2.polylines(img, [pts], True, 255, width)
def draw_square(img, pt1, pt2, color, width):
# if filled
if color != 0:
# fill the interior
cv2.rectangle(img,
pt1,
pt2,
color,
-1)
# draw the edge
cv2.rectangle(img,
pt1,
pt2,
255,
width)
# if not filled
else:
cv2.rectangle(img,
pt1,
pt2,
255,
width)
def draw_pentagon(img, pts, color, width):
# if filled
if color != 0:
# fill the interior
cv2.fillConvexPoly(img, pts, color)
# draw the edge
cv2.polylines(img, [pts], True, 255, width)
# if not filled
else:
cv2.polylines(img, [pts], True, 255, width)
def draw_hexagon(img, pts, color, width):
# if filled
if color != 0:
# fill the interior
cv2.fillConvexPoly(img, pts, color)
# draw the edge
cv2.polylines(img, [pts], True, 255, width)
# if not filled
else:
cv2.polylines(img, [pts], True, 255, width)
def draw_circle(img, center, radius, color, width):
# if filled
if color != 0:
# fill the interior
cv2.circle(img,
center,
radius,
color,
-1)
# draw the edge
cv2.circle(img,
center,
radius,
255,
width)
# if not filled
else:
cv2.circle(img,
center,
radius,
255,
width)
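
A minimal composition sketch (editor's illustration, not repo code; assumes the imports and constants above): tile nine random panels into the 3x3 matrix image and write it to disk.

    panels = [np.random.randint(0, 256, (IMAGE_SIZE, IMAGE_SIZE)).astype(np.uint8)
              for _ in range(9)]
    # generate_matrix draws the 480x480 grid with separator lines
    imsave(generate_matrix(panels), "demo_matrix.jpg")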
================================================
FILE: src/dataset/sampling.py
================================================
# -*- coding: utf-8 -*-
import numpy as np
from scipy.misc import comb
from const import MAX_COMPONENTS, RULE_ATTR
from Rule import Rule_Wrapper
def sample_rules():
"""First sample # components; for each component, sample a rule on each attribute.
"""
num_components = np.random.randint(1, MAX_COMPONENTS + 1)
all_rules = []
for i in range(num_components):
all_rules_component = []
for j in range(len(RULE_ATTR)):
idx = np.random.choice(len(RULE_ATTR[j]))
name_attr_param = RULE_ATTR[j][idx]
all_rules_component.append(Rule_Wrapper(name_attr_param[0], name_attr_param[1], name_attr_param[2], component_idx=i))
all_rules.append(all_rules_component)
return all_rules
# pay attention to Position Arithmetic, new entities (resample)
def sample_attr_avail(rule_groups, row_3_3):
"""Sample available attributes whose values could be modified.
Arguments:
rule_groups(list of list of Rule): a list of rules to apply to the component
row_3_3(AoTNode): the answer AoT
    Returns:
        ret(list of list): entries of the form [component_idx, attr_name, available_times, min_level, max_level]
    """
ret = []
for i in range(len(rule_groups)):
rule_group = rule_groups[i]
start_node_layout = row_3_3.children[0].children[i].children[0]
row_3_3_layout = row_3_3.children[0].children[i].children[0]
uni = row_3_3_layout.uniformity.get_value()
# Number/Position
# If Rule on Number: Only change Number
# If Rule on Position: Both Number and Position could be changed
rule = rule_group[0]
num = row_3_3_layout.number.get_value()
most_num = len(start_node_layout.position.values)
if rule.attr == "Number":
num_times = 0
min_level = start_node_layout.orig_layout_constraint["Number"][0]
max_level = start_node_layout.orig_layout_constraint["Number"][1]
for k in range(min_level, max_level + 1):
if k + 1 != num:
num_times += comb(most_num, k + 1)
if num_times > 0:
ret.append([i, "Number", num_times, min_level, max_level])
# Constant or on Position
else:
num_times = 0
min_level = start_node_layout.orig_layout_constraint["Number"][0]
max_level = start_node_layout.orig_layout_constraint["Number"][1]
for k in range(min_level, max_level + 1):
if k + 1 != num:
num_times += comb(most_num, k + 1)
if num_times > 0:
ret.append([i, "Number", num_times, min_level, max_level])
pos_times = comb(most_num, row_3_3_layout.number.get_value())
pos_times -= 1
if pos_times > 0:
ret.append([i, "Position", pos_times, None, None])
# Type, Size, Color
for j in range(1, len(rule_group)):
rule = rule_group[j]
rule_attr = rule.attr
min_level = start_node_layout.orig_entity_constraint[rule_attr][0]
max_level = start_node_layout.orig_entity_constraint[rule_attr][1]
if rule.name == "Constant":
if uni or rule_group[0].name == "Constant" or \
(rule_group[0].attr == "Position" and
(rule_group[0].name == "Progression" or rule_group[0].name == "Distribute_Three")):
times = max_level - min_level + 1
times = times - 1
if times > 0:
ret.append([i, rule_attr, times, min_level, max_level])
else:
times = max_level - min_level + 1
times = times - 1
if times > 0:
ret.append([i, rule_attr, times, min_level, max_level])
return ret
def sample_attr(attrs_list):
"""Given the attr_avail list, sample one attribute to modify the value.
If the available times becomes zero, delete it.
Arguments:
attrs_list(list of list): a flat component of available attributes
to change the values; consisting of different component indexes
"""
attr_idx = np.random.choice(len(attrs_list))
component_idx, attr_name, _, min_level, max_level = attrs_list[attr_idx]
attrs_list[attr_idx][2] -= 1
if attrs_list[attr_idx][2] == 0:
del attrs_list[attr_idx]
return component_idx, attr_name, min_level, max_level
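
A worked instance of the counting above (editor's sketch): in a 3x3 layout (most_num = 9) holding 4 entities, Position alone admits comb(9, 4) - 1 = 125 distractor layouts that keep Number fixed.

    from scipy.misc import comb
    print comb(9, 4, exact=True) - 1  # 125 alternative position sets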
================================================
FILE: src/dataset/serialize.py
================================================
# -*- coding: utf-8 -*-
import json
import xml.etree.ElementTree as ET
import numpy as np
from const import META_STRUCTURE_FORMAT
from api import get_real_bbox, get_mask, rle_encode
def n_tree_serialize(aot):
assert aot.is_pg
ret = ""
if aot.level == "Layout":
return aot.name + "./"
else:
ret += aot.name + "."
for child in aot.children:
x = n_tree_serialize(child)
ret += x
ret += "."
ret += "/"
return ret
def serialize_aot(aot):
"""Meta Structure format
META_STRUCTURE_FORMAT provided by const.py
"""
n_tree = n_tree_serialize(aot)
meta_structure = np.zeros(len(META_STRUCTURE_FORMAT), np.uint8)
split = n_tree.split(".")
for node in split:
try:
node_index = META_STRUCTURE_FORMAT.index(node)
meta_structure[node_index] = 1
except ValueError:
continue
return split, meta_structure
def serialize_rules(rule_groups):
"""Meta matrix format
["Constant", "Progression", "Arithmetic", "Distribute_Three", "Number", "Position", "Type", "Size", "Color"]
"""
meta_matrix = np.zeros((8, 9), np.uint8)
counter = 0
for rule_group in rule_groups:
for rule in rule_group:
if rule.name == "Constant":
meta_matrix[counter, 0] = 1
elif rule.name == "Progression":
meta_matrix[counter, 1] = 1
elif rule.name == "Arithmetic":
meta_matrix[counter, 2] = 1
else:
meta_matrix[counter, 3] = 1
if rule.attr == "Number/Position":
meta_matrix[counter, 4] = 1
meta_matrix[counter, 5] = 1
elif rule.attr == "Number":
meta_matrix[counter, 4] = 1
elif rule.attr == "Position":
meta_matrix[counter, 5] = 1
elif rule.attr == "Type":
meta_matrix[counter, 6] = 1
elif rule.attr == "Size":
meta_matrix[counter, 7] = 1
else:
meta_matrix[counter, 8] = 1
counter += 1
return meta_matrix, np.bitwise_or.reduce(meta_matrix)
def dom_problem(instances, rule_groups):
data = ET.Element("Data")
panels = ET.SubElement(data, "Panels")
for i in range(len(instances)):
panel = instances[i]
panel_i = ET.SubElement(panels, "Panel")
struct = panel.children[0]
struct_i = ET.SubElement(panel_i, "Struct")
struct_i.set("name", struct.name)
for j in range(len(struct.children)):
component = struct.children[j]
component_j = ET.SubElement(struct_i, "Component")
component_j.set("id", str(j))
component_j.set("name", component.name)
layout = component.children[0]
layout_k = ET.SubElement(component_j, "Layout")
layout_k.set("name", layout.name)
layout_k.set("Number", str(layout.number.get_value_level()))
layout_k.set("Position", json.dumps(layout.position.values))
layout_k.set("Uniformity", str(layout.uniformity.get_value_level()))
for l in range(len(layout.children)):
entity = layout.children[l]
entity_l = ET.SubElement(layout_k, "Entity")
entity_bbox = entity.bbox
entity_type = entity.type.get_value()
entity_size = entity.size.get_value()
entity_angle = entity.angle.get_value()
entity_l.set("bbox", json.dumps(entity_bbox))
entity_l.set("real_bbox", json.dumps(get_real_bbox(entity_bbox, entity_type, entity_size, entity_angle)))
entity_l.set("mask", rle_encode(get_mask(entity_bbox, entity_type, entity_size, entity_angle)))
entity_l.set("Type", str(entity.type.get_value_level()))
entity_l.set("Size", str(entity.size.get_value_level()))
entity_l.set("Color", str(entity.color.get_value_level()))
entity_l.set("Angle", str(entity.angle.get_value_level()))
rules = ET.SubElement(data, "Rules")
for i in range(len(rule_groups)):
rule_group = rule_groups[i]
rule_group_i = ET.SubElement(rules, "Rule_Group")
rule_group_i.set("id", str(i))
for rule in rule_group:
rule_j = ET.SubElement(rule_group_i, "Rule")
rule_j.set("name", rule.name)
rule_j.set("attr", rule.attr)
return ET.tostring(data)
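
A small sketch of the encoding in serialize_rules (editor's illustration, assuming the numpy import above): two rules, Arithmetic on Number and Constant on Type, fill one row each, and the meta target is the column-wise OR over all rows.

    meta_matrix = np.zeros((8, 9), np.uint8)
    meta_matrix[0, [2, 4]] = 1  # row 0: Arithmetic (col 2) on Number (col 4)
    meta_matrix[1, [0, 6]] = 1  # row 1: Constant (col 0) on Type (col 6)
    meta_target = np.bitwise_or.reduce(meta_matrix)  # -> [1 0 1 0 1 0 1 0 0]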
================================================
FILE: src/dataset/solver.py
================================================
# -*- coding: utf-8 -*-
import numpy as np
def solve(rule_groups, context, candidates):
"""Search-based Heuristic Solver.
Arguments:
rule_groups(list of list of Rule): rules that apply to each component
context(list of AoTNode): a list of context AoTs in a row-major order;
should be of length 8
candidates(list of AoTNode): a list of candidate answer AoTs;
should be of length 8
Returns:
ans(int): index of the correct answer in the candidates
"""
satisfied = [0] * len(candidates)
for i in range(len(candidates)):
candidate = candidates[i]
# note that rule.component_idx should be the same as j
for j in range(len(rule_groups)):
rule_group = rule_groups[j]
rule_num_pos = rule_group[0]
satisfied[i] += check_num_pos(rule_num_pos, context, candidate)
regenerate = False
if rule_num_pos.attr == "Number" or rule_num_pos.name == "Arithmetic":
regenerate = True
rule_type = rule_group[1]
satisfied[i] += check_entity(rule_type, context, candidate, "Type", regenerate)
rule_size = rule_group[2]
satisfied[i] += check_entity(rule_size, context, candidate, "Size", regenerate)
rule_color = rule_group[3]
satisfied[i] += check_entity(rule_color, context, candidate, "Color", regenerate)
satisfied = np.array(satisfied)
answer_set = np.where(satisfied == max(satisfied))[0]
return np.random.choice(answer_set)
def check_num_pos(rule_num_pos, context, candidate):
"""Check whether Rule on layout attribute is satisfied.
Arguments:
rule_num_pos(Rule): the rule to check
context(list of AoTNode): the 8 context figures
candidate(AoTNode): the candidate AoT
Returns:
ret(int): 0 if failure, 1 if success
"""
ret = 0
component_idx = rule_num_pos.component_idx
row_3_1_layout = context[6].children[0].children[component_idx].children[0]
row_3_2_layout = context[7].children[0].children[component_idx].children[0]
candidate_layout = candidate.children[0].children[component_idx].children[0]
if rule_num_pos.name == "Constant":
set_row_3_1_pos = set(row_3_1_layout.position.get_value_idx())
set_row_3_2_pos = set(row_3_2_layout.position.get_value_idx())
set_candidate_pos = set(candidate_layout.position.get_value_idx())
        # note that two sets compare equal only when both their size (Number) and contents (Position) match
if set_candidate_pos == set_row_3_1_pos and set_candidate_pos == set_row_3_2_pos:
ret = 1
elif rule_num_pos.name == "Progression":
if rule_num_pos.attr == "Number":
row_3_1_num = row_3_1_layout.number.get_value_level()
row_3_2_num = row_3_2_layout.number.get_value_level()
candidate_num = candidate_layout.number.get_value_level()
if row_3_2_num * 2 == row_3_1_num + candidate_num:
ret = 1
else:
row_3_1_pos = row_3_1_layout.position.get_value_idx()
row_3_2_pos = row_3_2_layout.position.get_value_idx()
candidate_pos = candidate_layout.position.get_value_idx()
most_num = len(candidate_layout.position.values)
diff = rule_num_pos.value
if (set((row_3_1_pos + diff) % most_num) == set(row_3_2_pos)) and \
(set((row_3_2_pos + diff) % most_num) == set(candidate_pos)):
ret = 1
elif rule_num_pos.name == "Arithmetic":
mode = rule_num_pos.value
if rule_num_pos.attr == "Number":
row_3_1_num = row_3_1_layout.number.get_value()
row_3_2_num = row_3_2_layout.number.get_value()
candidate_num = candidate_layout.number.get_value()
if mode > 0 and (candidate_num == row_3_1_num + row_3_2_num):
ret = 1
if mode < 0 and (candidate_num == row_3_1_num - row_3_2_num):
ret = 1
else:
row_3_1_pos = row_3_1_layout.position.get_value_idx()
row_3_2_pos = row_3_2_layout.position.get_value_idx()
candidate_pos = candidate_layout.position.get_value_idx()
if mode > 0 and (set(candidate_pos) == set(row_3_1_pos) | set(row_3_2_pos)):
ret = 1
if mode < 0 and (set(candidate_pos) == set(row_3_1_pos) - set(row_3_2_pos)):
ret = 1
else:
three_values = rule_num_pos.value_levels[2]
if rule_num_pos.attr == "Number":
row_3_1_num = row_3_1_layout.number.get_value_level()
row_3_2_num = row_3_2_layout.number.get_value_level()
candidate_num = candidate_layout.number.get_value_level()
if row_3_1_num == three_values[0] and \
row_3_2_num == three_values[1] and \
candidate_num == three_values[2]:
ret = 1
else:
row_3_1_pos = row_3_1_layout.position.get_value_idx()
row_3_2_pos = row_3_2_layout.position.get_value_idx()
candidate_pos = candidate_layout.position.get_value_idx()
if set(row_3_1_pos) == set(three_values[0]) and \
set(row_3_2_pos) == set(three_values[1]) and \
set(candidate_pos) == set(three_values[2]):
ret = 1
return ret
def check_consistency(candidate, attr, component_idx):
candidate_layout = candidate.children[0].children[component_idx].children[0]
entity_0 = candidate_layout.children[0]
attr_name = attr.lower()
entity_0_value = getattr(entity_0, attr_name).get_value_level()
for i in range(1, len(candidate_layout.children)):
entity_i = candidate_layout.children[i]
entity_i_value = getattr(entity_i, attr_name).get_value_level()
if entity_i_value != entity_0_value:
return False
return True
def check_entity(rule, context, candidate, attr, regenerate):
"""Check whether Rule on entity attribute is satisfied.
Arguments:
rule(Rule): the rule to check
context(list of AoTNode): the 8 context figures
candidate(AoTNode): the candidate AoT
attr(str): attribute name
Returns:
ret(int): 0 if failure, 1 if success
"""
ret = 0
component_idx = rule.component_idx
row_3_1_layout = context[6].children[0].children[component_idx].children[0]
row_3_2_layout = context[7].children[0].children[component_idx].children[0]
candidate_layout = candidate.children[0].children[component_idx].children[0]
uni = candidate_layout.uniformity.get_value()
attr_name = attr.lower()
if rule.name == "Constant":
if uni:
if check_consistency(candidate, attr, component_idx):
if getattr(candidate_layout.children[0], attr_name).get_value_level() == \
getattr(row_3_2_layout.children[0], attr_name).get_value_level():
ret = 1
else:
row_3_1_num = row_3_1_layout.number.get_value_level()
row_3_2_num = row_3_2_layout.number.get_value_level()
candidate_num = candidate_layout.number.get_value_level()
if (row_3_1_num == row_3_2_num) and (row_3_2_num == candidate_num):
if regenerate:
ret = 1
else:
flag = True
for i in range(len(candidate_layout.children)):
if not (getattr(candidate_layout.children[i], attr_name).get_value_level() ==
getattr(row_3_2_layout.children[i], attr_name).get_value_level()):
flag = False
break
if flag:
ret = 1
else:
ret = 1
elif rule.name == "Progression":
if check_consistency(candidate, attr, component_idx):
row_3_1_value = getattr(row_3_1_layout.children[0], attr_name).get_value_level()
row_3_2_value = getattr(row_3_2_layout.children[0], attr_name).get_value_level()
candidate_value = getattr(candidate_layout.children[0], attr_name).get_value_level()
if row_3_2_value * 2 == row_3_1_value + candidate_value:
ret = 1
elif rule.name == "Arithmetic":
if check_consistency(candidate, attr, component_idx):
row_3_1_value = getattr(row_3_1_layout.children[0], attr_name).get_value_level()
row_3_2_value = getattr(row_3_2_layout.children[0], attr_name).get_value_level()
candidate_value = getattr(candidate_layout.children[0], attr_name).get_value_level()
if rule.value > 0:
if attr == "Color":
if candidate_value == row_3_1_value + row_3_2_value:
ret = 1
else:
if candidate_value == row_3_1_value + row_3_2_value + 1:
ret = 1
if rule.value < 0:
if attr == "Color":
if candidate_value == row_3_1_value - row_3_2_value:
ret = 1
else:
if candidate_value == row_3_1_value - row_3_2_value - 1:
ret = 1
else:
if check_consistency(candidate, attr, component_idx):
row_3_1_value = getattr(row_3_1_layout.children[0], attr_name).get_value_level()
row_3_2_value = getattr(row_3_2_layout.children[0], attr_name).get_value_level()
candidate_value = getattr(candidate_layout.children[0], attr_name).get_value_level()
three_values = rule.value_levels[2]
if row_3_1_value == three_values[0] and \
row_3_2_value == three_values[1] and \
candidate_value == three_values[2]:
ret = 1
return ret
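
The final selection in solve() is an argmax with uniform tie-breaking; a standalone sketch of that step (editor's illustration, with made-up scores):

    satisfied = np.array([3, 4, 4, 2, 4, 1, 0, 3])
    answer_set = np.where(satisfied == max(satisfied))[0]  # indices 1, 2, 4
    print np.random.choice(answer_set)  # one of the top-scoring candidates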
================================================
FILE: src/model/__init__.py
================================================
""" RAVEN benchmarking code
Author: Chi Zhang
Date: 05/14/2019
Contact: chi.zhang@ucla.edu
"""
================================================
FILE: src/model/basic_model.py
================================================
import torch
import torch.nn as nn
import torch.nn.functional as F
class BasicModel(nn.Module):
def __init__(self, args):
super(BasicModel, self).__init__()
self.name = args.model
def load_model(self, path, epoch):
state_dict = torch.load(path+'{}_epoch_{}.pth'.format(self.name, epoch))['state_dict']
self.load_state_dict(state_dict)
def save_model(self, path, epoch, acc, loss):
torch.save({'state_dict': self.state_dict(), 'acc': acc, 'loss': loss}, path+'{}_epoch_{}.pth'.format(self.name, epoch))
def compute_loss(self, output, target, meta_target, meta_structure):
pass
def train_(self, image, target, meta_target, meta_structure, embedding, indicator):
self.optimizer.zero_grad()
output = self(image, embedding, indicator)
loss = self.compute_loss(output, target, meta_target, meta_structure)
loss.backward()
self.optimizer.step()
pred = output[0].data.max(1)[1]
correct = pred.eq(target.data).cpu().sum().numpy()
accuracy = correct * 100.0 / target.size()[0]
return loss.item(), accuracy
def validate_(self, image, target, meta_target, meta_structure, embedding, indicator):
with torch.no_grad():
output = self(image, embedding, indicator)
loss = self.compute_loss(output, target, meta_target, meta_structure)
pred = output[0].data.max(1)[1]
correct = pred.eq(target.data).cpu().sum().numpy()
accuracy = correct * 100.0 / target.size()[0]
return loss.item(), accuracy
def test_(self, image, target, meta_target, meta_structure, embedding, indicator):
with torch.no_grad():
output = self(image, embedding, indicator)
pred = output[0].data.max(1)[1]
correct = pred.eq(target.data).cpu().sum().numpy()
accuracy = correct * 100.0 / target.size()[0]
return accuracy
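
The contract BasicModel expects from subclasses, as a minimal hypothetical subclass (editor's sketch; TinyModel is illustrative, not part of the repo): define self.optimizer, compute_loss(), and a forward() whose first return value is the 8-way score tensor.

    import torch.optim as optim
    class TinyModel(BasicModel):
        def __init__(self, args):
            super(TinyModel, self).__init__(args)
            self.fc = nn.Linear(16 * 80 * 80, 8)  # flatten all 16 panels, score 8 answers
            self.optimizer = optim.Adam(self.parameters(), lr=1e-4)
        def compute_loss(self, output, target, meta_target, meta_structure):
            return F.cross_entropy(output[0], target)
        def forward(self, x, embedding, indicator):
            return self.fc(x.view(x.size(0), -1)), None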
================================================
FILE: src/model/cnn_lstm.py
================================================
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from basic_model import BasicModel
from fc_tree_net import FCTreeNet
class conv_module(nn.Module):
def __init__(self):
super(conv_module, self).__init__()
self.conv1 = nn.Conv2d(1, 16, kernel_size=3, stride=2)
self.batch_norm1 = nn.BatchNorm2d(16)
self.relu1 = nn.ReLU()
self.conv2 = nn.Conv2d(16, 16, kernel_size=3, stride=2)
self.batch_norm2 = nn.BatchNorm2d(16)
self.relu2 = nn.ReLU()
self.conv3 = nn.Conv2d(16, 16, kernel_size=3, stride=2)
self.batch_norm3 = nn.BatchNorm2d(16)
self.relu3 = nn.ReLU()
self.conv4 = nn.Conv2d(16, 16, kernel_size=3, stride=2)
self.batch_norm4 = nn.BatchNorm2d(16)
self.relu4 = nn.ReLU()
def forward(self, x):
x = self.conv1(x)
x = self.relu1(self.batch_norm1(x))
x = self.conv2(x)
x = self.relu2(self.batch_norm2(x))
x = self.conv3(x)
x = self.relu3(self.batch_norm3(x))
x = self.conv4(x)
x = self.relu4(self.batch_norm4(x))
return x.view(-1, 16, 16*4*4)
class lstm_module(nn.Module):
def __init__(self):
super(lstm_module, self).__init__()
self.lstm = nn.LSTM(input_size=16*4*4, hidden_size=96, num_layers=1)
self.dropout = nn.Dropout(0.5)
self.fc = nn.Linear(96, 8)
def forward(self, x):
x = x.permute(1, 0, 2)
hidden, _ = self.lstm(x)
score = self.fc(hidden[-1, :, :])
return score
class CNN_LSTM(BasicModel):
def __init__(self, args):
super(CNN_LSTM, self).__init__(args)
self.conv = conv_module()
self.lstm = lstm_module()
self.fc_tree_net = FCTreeNet(in_dim=300, img_dim=256)
self.optimizer = optim.Adam(self.parameters(), lr=args.lr, betas=(args.beta1, args.beta2), eps=args.epsilon)
def compute_loss(self, output, target, meta_target, meta_structure):
pred = output[0]
loss = F.cross_entropy(pred, target)
return loss
def forward(self, x, embedding, indicator):
alpha = 1.0
features = self.conv(x.view(-1, 1, 80, 80))
features_tree = self.fc_tree_net(features, embedding, indicator)
features_tree = features_tree.view(-1, 16, 256)
final_features = features + alpha * features_tree
score = self.lstm(final_features)
return score, None
================================================
FILE: src/model/cnn_mlp.py
================================================
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from basic_model import BasicModel
from fc_tree_net import FCTreeNet
class conv_module(nn.Module):
def __init__(self):
super(conv_module, self).__init__()
self.conv1 = nn.Conv2d(16, 32, kernel_size=3, stride=2)
self.batch_norm1 = nn.BatchNorm2d(32)
self.relu1 = nn.ReLU()
self.conv2 = nn.Conv2d(32, 32, kernel_size=3, stride=2)
self.batch_norm2 = nn.BatchNorm2d(32)
self.relu2 = nn.ReLU()
self.conv3 = nn.Conv2d(32, 32, kernel_size=3, stride=2)
self.batch_norm3 = nn.BatchNorm2d(32)
self.relu3 = nn.ReLU()
self.conv4 = nn.Conv2d(32, 32, kernel_size=3, stride=2)
self.batch_norm4 = nn.BatchNorm2d(32)
self.relu4 = nn.ReLU()
def forward(self, x):
x = self.conv1(x)
x = self.relu1(self.batch_norm1(x))
x = self.conv2(x)
x = self.relu2(self.batch_norm2(x))
x = self.conv3(x)
x = self.relu3(self.batch_norm3(x))
x = self.conv4(x)
x = self.relu4(self.batch_norm4(x))
return x.view(-1, 32*4*4)
class mlp_module(nn.Module):
def __init__(self):
super(mlp_module, self).__init__()
self.fc1 = nn.Linear(32*4*4, 512)
self.relu1 = nn.ReLU()
self.fc2 = nn.Linear(512, 8)
self.dropout = nn.Dropout(0.5)
def forward(self, x):
x = self.relu1(self.fc1(x))
x = self.dropout(x)
x = self.fc2(x)
return x
class CNN_MLP(BasicModel):
def __init__(self, args):
super(CNN_MLP, self).__init__(args)
self.conv = conv_module()
self.mlp = mlp_module()
self.fc_tree_net = FCTreeNet(in_dim=300, img_dim=512)
self.optimizer = optim.Adam(self.parameters(), lr=args.lr, betas=(args.beta1, args.beta2), eps=args.epsilon)
def compute_loss(self, output, target, meta_target, meta_structure):
pred = output[0]
loss = F.cross_entropy(pred, target)
return loss
def forward(self, x, embedding, indicator):
alpha = 1.0
features = self.conv(x.view(-1, 16, 80, 80))
features_tree = features.view(-1, 1, 512)
features_tree = self.fc_tree_net(features_tree, embedding, indicator)
final_features = features + alpha * features_tree
score = self.mlp(final_features)
return score, None
================================================
FILE: src/model/const/__init__.py
================================================
from const import *
================================================
FILE: src/model/const/const.py
================================================
# -*- coding: utf-8 -*-
# Maximum number of components in a RPM
MAX_COMPONENTS = 2
# Canvas parameters
IMAGE_SIZE = 160
CENTER = (IMAGE_SIZE / 2, IMAGE_SIZE / 2)
DEFAULT_RADIUS = IMAGE_SIZE / 4
DEFAULT_WIDTH = 2
# Attribute parameters
# Number
NUM_VALUES = [1, 2, 3, 4, 5, 6, 7, 8, 9]
NUM_MIN = 0
NUM_MAX = len(NUM_VALUES) - 1
# Uniformity
UNI_VALUES = [False, False, False, True]
UNI_MIN = 0
UNI_MAX = len(UNI_VALUES) - 1
# Type
TYPE_VALUES = ["none", "triangle", "square", "pentagon", "hexagon", "circle"]
TYPE_MIN = 0
TYPE_MAX = len(TYPE_VALUES) - 1
# Size
SIZE_VALUES = [0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
SIZE_MIN = 0
SIZE_MAX = len(SIZE_VALUES) - 1
# Color
COLOR_VALUES = [255, 224, 196, 168, 140, 112, 84, 56, 28, 0]
COLOR_MIN = 0
COLOR_MAX = len(COLOR_VALUES) - 1
# Angle: self-rotation
ANGLE_VALUES = [-135, -90, -45, 0, 45, 90, 135, 180]
ANGLE_MIN = 0
ANGLE_MAX = len(ANGLE_VALUES) - 1
META_STRUCTURE_FORMAT = ["Singleton", "Left_Right", "Up_Down", "Out_In", "Left", "Right",
                         "Up", "Down", "Out", "In", "Grid", "Center_Single", "Distribute_Four",
                         "Distribute_Nine", "Left_Center_Single", "Right_Center_Single",
                         "Up_Center_Single", "Down_Center_Single", "Out_Center_Single", "In_Center_Single", "In_Distribute_Four"]
# Rule, Attr, Param
# The design encodes rule priority order: Number/Position always comes first
# Number and Position could not both be sampled
# Progression on Number: Number on each Panel +1/2 or -1/2
# Progression on Position: Entities on each Panel roll over the layout
# Arithmetic on Number: Number on the third Panel = Number on the first +/- Number on the second (1 for + and -1 for -)
# Arithmetic on Position: 1 for SET_UNION and -1 for SET_DIFF
# Distribute_Three on Number: Three numbers through each row
# Distribute_Three on Position: Three positions (same number) through each row
# Constant on Number/Position: Nothing changes
# Progression on Type: Type progression defined as the number of edges on each entity (Triangle, Square, Pentagon, Hexagon, Circle)
# Distribute_Three on Type: Three types through each row
# Constant on Type: Nothing changes
# Progression on Size: Size on each entity +1/2 or -1/2
# Arithmetic on Size: Size on the third Panel = Size on the first +/- Size on the second (1 for + and -1 for -)
# Distribute_Three on Size: Three sizes through each row
# Constant on Size: Nothing changes
# Progression on Color: Color +1/2 or -1/2
# Arithmetic on Color: Color on the third Panel = Color on the first +/- Color on the second (1 for + and -1 for -)
# Distribute_Three on Color: Three colors through each row
# Constant on Color: Nothing changes
# Note that all rules on Type, Size and Color enforce value consistency in a panel
RULE_ATTR = [[["Progression", "Number", [-2, -1, 1, 2]],
["Progression", "Position", [-2, -1, 1, 2]],
["Arithmetic", "Number", [1, -1]],
["Arithmetic", "Position", [1, -1]],
["Distribute_Three", "Number", None],
["Distribute_Three", "Position", None],
["Constant", "Number/Position", None]],
[["Progression", "Type", [-2, -1, 1, 2]],
["Distribute_Three", "Type", None],
["Constant", "Type", None]],
[["Progression", "Size", [-2, -1, 1, 2]],
["Arithmetic", "Size", [1, -1]],
["Distribute_Three", "Size", None],
["Constant", "Size", None]],
[["Progression", "Color", [-2, -1, 1, 2]],
["Arithmetic", "Color", [1, -1]],
["Distribute_Three", "Color", None],
["Constant", "Color", None]]]
================================================
FILE: src/model/fc_tree_net.py
================================================
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
class FCTreeNet(torch.nn.Module):
def __init__(self, in_dim=300, img_dim=256, use_cuda=True):
        '''
        Initialization for the FCTreeNet model, essentially a Child-Sum-LSTM-style
        tree network with non-linear activation embeddings for the different nodes in the AoG.
        Weights are shared across all cells.
        :param in_dim: input feature dimension of the word embedding (from string to vector space)
        :param img_dim: dimension of the input image feature, should be (panel_pair_number * img_feature_dim (e.g. 512 or 256))
        '''
super(FCTreeNet, self).__init__()
self.in_dim = in_dim
self.img_dim = img_dim
self.fc = nn.Linear(self.in_dim, self.in_dim)
self.leaf = nn.Linear(self.in_dim + self.img_dim, self.img_dim)
self.middle = nn.Linear(self.in_dim + self.img_dim, self.img_dim)
self.merge = nn.Linear(self.in_dim + self.img_dim, self.img_dim)
self.root = nn.Linear(self.in_dim + self.img_dim, self.img_dim)
self.relu = nn.ReLU()
def forward(self, image_feature, input, indicator):
        '''
        Forward function for the FCTreeNet model.
        :param image_feature: image features for each node, e.g. (batch_size * 16 (panel_pair_number) * feature_dim (output from CNN))
        :param input: word embeddings of the structure nodes, (batch_size * 6 * input_word_embedding_dimension)
        :param indicator: indicates whether the structure has two branches (batch_size * 1)
        :return: fused features of size ((batch_size * panel_pair_number) * feature_dim)
        '''
# image_feature = image_feature.view(-1, 16, image_feature.size(2))
input = self.fc(input.view(-1, input.size(-1)))
input = input.view(-1, 6, input.size(-1))
input = input.unsqueeze(1).repeat(1, image_feature.size(1), 1, 1)
indicator = indicator.unsqueeze(1).repeat(1, image_feature.size(1), 1).view(-1, 1)
leaf_left = input[:, :, 3, :].view(-1, input.size(-1)) # (batch_size * panel_pair_num) * input_word_embedding_dimension
leaf_right = input[:, :, 5, :].view(-1, input.size(-1))
inter_left = input[:, :, 2, :].view(-1, input.size(-1))
inter_right = input[:, :, 4, :].view(-1, input.size(-1))
merge = input[:, :, 1, :].view(-1, input.size(-1))
root = input[:, :, 0, :].view(-1, input.size(-1))
        # concatenate image_feature with the word embeddings to form the leaf node inputs
leaf_left = torch.cat((leaf_left, image_feature.view(-1, image_feature.size(-1))), dim=-1)
leaf_right = torch.cat((leaf_right, image_feature.view(-1, image_feature.size(-1))), dim=-1)
out_leaf_left = self.leaf(leaf_left)
out_leaf_right = self.leaf(leaf_right)
out_leaf_left = self.relu(out_leaf_left)
out_leaf_right = self.relu(out_leaf_right)
out_left = self.middle(torch.cat((inter_left, out_leaf_left), dim=-1))
out_right = self.middle(torch.cat((inter_right, out_leaf_right), dim=-1))
out_left = self.relu(out_left)
out_right = self.relu(out_right)
out_right = torch.mul(out_right, indicator)
merge_input = torch.cat((merge, out_left + out_right), dim=-1)
out_merge = self.merge(merge_input)
out_merge = self.relu(out_merge)
out_root = self.root(torch.cat((root, out_merge), dim=-1))
out_root = self.relu(out_root)
# size ((batch_size * panel_pair) * feature_dim)
return out_root
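
A shape-checking sketch (editor's illustration; all tensors are random stand-ins): 2 problems, 16 panel pairs, 256-d image features, 6 node embeddings of dimension 300, and one branch indicator per problem.

    net = FCTreeNet(in_dim=300, img_dim=256)
    feat = torch.randn(2, 16, 256)
    emb = torch.randn(2, 6, 300)
    ind = torch.ones(2, 1)
    print net(feat, emb, ind).size()  # (2 * 16, 256)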
================================================
FILE: src/model/main.py
================================================
import os
import numpy as np
import argparse
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
from utility import dataset, ToTensor
from cnn_mlp import CNN_MLP
from cnn_lstm import CNN_LSTM
from resnet18 import Resnet18_MLP
parser = argparse.ArgumentParser(description='our_model')
parser.add_argument('--model', type=str, default='Resnet18_MLP')
parser.add_argument('--epochs', type=int, default=200)
parser.add_argument('--batch_size', type=int, default=32)
parser.add_argument('--seed', type=int, default=12345)
parser.add_argument('--device', type=int, default=0)
parser.add_argument('--load_workers', type=int, default=16)
parser.add_argument('--resume', action='store_true', default=False)  # type=bool would treat any non-empty string as True
parser.add_argument('--path', type=str, default='/home/chizhang/Datasets/RAVEN-10000/')
parser.add_argument('--save', type=str, default='./experiments/checkpoint/')
parser.add_argument('--img_size', type=int, default=224)
parser.add_argument('--lr', type=float, default=1e-4)
parser.add_argument('--beta1', type=float, default=0.9)
parser.add_argument('--beta2', type=float, default=0.999)
parser.add_argument('--epsilon', type=float, default=1e-8)
parser.add_argument('--meta_alpha', type=float, default=0.0)
parser.add_argument('--meta_beta', type=float, default=0.0)
args = parser.parse_args()
args.cuda = torch.cuda.is_available()
torch.cuda.set_device(args.device)
if args.cuda:
torch.cuda.manual_seed(args.seed)
if not os.path.exists(args.save):
os.makedirs(args.save)
train = dataset(args.path, "train", args.img_size, transform=transforms.Compose([ToTensor()]))
valid = dataset(args.path, "val", args.img_size, transform=transforms.Compose([ToTensor()]))
test = dataset(args.path, "test", args.img_size, transform=transforms.Compose([ToTensor()]))
trainloader = DataLoader(train, batch_size=args.batch_size, shuffle=True, num_workers=args.load_workers)
validloader = DataLoader(valid, batch_size=args.batch_size, shuffle=False, num_workers=args.load_workers)
testloader = DataLoader(test, batch_size=args.batch_size, shuffle=False, num_workers=args.load_workers)
if args.model == "CNN_MLP":
model = CNN_MLP(args)
elif args.model == "CNN_LSTM":
model = CNN_LSTM(args)
elif args.model == "Resnet18_MLP":
model = Resnet18_MLP(args)
if args.resume:
model.load_model(args.save, 0)
print('Loaded model')
if args.cuda:
model = model.cuda()
def train(epoch):
model.train()
train_loss = 0
accuracy = 0
loss_all = 0.0
acc_all = 0.0
counter = 0
for batch_idx, (image, target, meta_target, meta_structure, embedding, indicator) in enumerate(trainloader):
counter += 1
if args.cuda:
image = image.cuda()
target = target.cuda()
meta_target = meta_target.cuda()
meta_structure = meta_structure.cuda()
embedding = embedding.cuda()
indicator = indicator.cuda()
loss, acc = model.train_(image, target, meta_target, meta_structure, embedding, indicator)
print('Train: Epoch:{}, Batch:{}, Loss:{:.6f}, Acc:{:.4f}.'.format(epoch, batch_idx, loss, acc))
loss_all += loss
acc_all += acc
if counter > 0:
print("Avg Training Loss: {:.6f}".format(loss_all/float(counter)))
def validate(epoch):
model.eval()
val_loss = 0
accuracy = 0
loss_all = 0.0
acc_all = 0.0
counter = 0
for batch_idx, (image, target, meta_target, meta_structure, embedding, indicator) in enumerate(validloader):
counter += 1
if args.cuda:
image = image.cuda()
target = target.cuda()
meta_target = meta_target.cuda()
meta_structure = meta_structure.cuda()
embedding = embedding.cuda()
indicator = indicator.cuda()
loss, acc = model.validate_(image, target, meta_target, meta_structure, embedding, indicator)
# print('Validate: Epoch:{}, Batch:{}, Loss:{:.6f}, Acc:{:.4f}.'.format(epoch, batch_idx, loss, acc))
loss_all += loss
acc_all += acc
if counter > 0:
print("Total Validation Loss: {:.6f}, Acc: {:.4f}".format(loss_all/float(counter), acc_all/float(counter)))
return loss_all/float(counter), acc_all/float(counter)
def test(epoch):
model.eval()
accuracy = 0
acc_all = 0.0
counter = 0
for batch_idx, (image, target, meta_target, meta_structure, embedding, indicator) in enumerate(testloader):
counter += 1
if args.cuda:
image = image.cuda()
target = target.cuda()
meta_target = meta_target.cuda()
meta_structure = meta_structure.cuda()
embedding = embedding.cuda()
indicator = indicator.cuda()
acc = model.test_(image, target, meta_target, meta_structure, embedding, indicator)
# print('Test: Epoch:{}, Batch:{}, Acc:{:.4f}.'.format(epoch, batch_idx, acc))
acc_all += acc
if counter > 0:
print("Total Testing Acc: {:.4f}".format(acc_all / float(counter)))
return acc_all/float(counter)
def main():
for epoch in range(0, args.epochs):
train(epoch)
avg_loss, avg_acc = validate(epoch)
test(epoch)
model.save_model(args.save, epoch, avg_acc, avg_loss)
if __name__ == '__main__':
main()
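
A typical training invocation (editor's sketch; the dataset path must point at a generated RAVEN folder that also contains embedding.npy):

    python src/model/main.py --model Resnet18_MLP --path /path/to/RAVEN-10000/ --batch_size 32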
================================================
FILE: src/model/resnet18.py
================================================
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torchvision.models as models
from basic_model import BasicModel
from fc_tree_net import FCTreeNet
class identity(nn.Module):
def __init__(self):
super(identity, self).__init__()
def forward(self, x):
return x
class mlp_module(nn.Module):
def __init__(self):
super(mlp_module, self).__init__()
self.fc1 = nn.Linear(512, 512)
self.relu1 = nn.ReLU()
self.fc2 = nn.Linear(512, 8+9+21)
self.dropout = nn.Dropout(0.5)
def forward(self, x):
x = self.relu1(self.fc1(x))
x = self.dropout(x)
x = self.fc2(x)
return x
class Resnet18_MLP(BasicModel):
def __init__(self, args):
super(Resnet18_MLP, self).__init__(args)
self.resnet18 = models.resnet18(pretrained=False)
self.resnet18.conv1 = nn.Conv2d(16, 64, kernel_size=7, stride=2, padding=3, bias=False)
self.resnet18.fc = identity()
self.mlp = mlp_module()
self.fc_tree_net = FCTreeNet(in_dim=300, img_dim=512)
self.optimizer = optim.Adam(self.parameters(), lr=args.lr, betas=(args.beta1, args.beta2), eps=args.epsilon)
self.meta_alpha = args.meta_alpha
self.meta_beta = args.meta_beta
def compute_loss(self, output, target, meta_target, meta_structure):
pred, meta_target_pred, meta_struct_pred = output[0], output[1], output[2]
target_loss = F.cross_entropy(pred, target)
meta_target_pred = torch.chunk(meta_target_pred, chunks=9, dim=1)
meta_target = torch.chunk(meta_target, chunks=9, dim=1)
meta_target_loss = 0.
for idx in range(0, 9):
meta_target_loss += F.binary_cross_entropy(F.sigmoid(meta_target_pred[idx]), meta_target[idx])
meta_struct_pred = torch.chunk(meta_struct_pred, chunks=21, dim=1)
meta_structure = torch.chunk(meta_structure, chunks=21, dim=1)
meta_struct_loss = 0.
for idx in range(0, 21):
meta_struct_loss += F.binary_cross_entropy(F.sigmoid(meta_struct_pred[idx]), meta_structure[idx])
loss = target_loss + self.meta_alpha*meta_struct_loss/21. + self.meta_beta*meta_target_loss/9.
return loss
def forward(self, x, embedding, indicator):
alpha = 1.0
features = self.resnet18(x.view(-1, 16, 224, 224))
features_tree = features.view(-1, 1, 512)
features_tree = self.fc_tree_net(features_tree, embedding, indicator)
final_features = features + alpha * features_tree
output = self.mlp(final_features)
pred = output[:,0:8]
meta_target_pred = output[:,8:17]
meta_struct_pred = output[:,17:38]
return pred, meta_target_pred, meta_struct_pred
================================================
FILE: src/model/utility/__init__.py
================================================
from dataset_utility import *
================================================
FILE: src/model/utility/dataset_utility.py
================================================
import os
import glob
import numpy as np
from scipy import misc
import torch
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
class ToTensor(object):
def __call__(self, sample):
return torch.tensor(sample, dtype=torch.float32)
class dataset(Dataset):
def __init__(self, root_dir, dataset_type, img_size, transform=None, shuffle=False):
self.root_dir = root_dir
self.transform = transform
self.file_names = [f for f in glob.glob(os.path.join(root_dir, "*", "*.npz")) \
if dataset_type in f]
self.img_size = img_size
self.embeddings = np.load(os.path.join(root_dir, 'embedding.npy'), allow_pickle=True)
self.shuffle = shuffle
def __len__(self):
return len(self.file_names)
def __getitem__(self, idx):
data_path = self.file_names[idx]
data = np.load(data_path)
image = data["image"].reshape(16, 160, 160)
target = data["target"]
structure = data["structure"]
meta_target = data["meta_target"]
meta_structure = data["meta_structure"]
if self.shuffle:
context = image[:8, :, :]
choices = image[8:, :, :]
indices = range(8)
np.random.shuffle(indices)
new_target = indices.index(target)
new_choices = choices[indices, :, :]
image = np.concatenate((context, new_choices))
target = new_target
resize_image = []
for idx in range(0, 16):
resize_image.append(misc.imresize(image[idx,:,:], (self.img_size, self.img_size)))
resize_image = np.stack(resize_image)
# image = resize(image, (16, 128, 128))
# meta_matrix = data["mata_matrix"]
embedding = torch.zeros((6, 300), dtype=torch.float)
indicator = torch.zeros(1, dtype=torch.float)
element_idx = 0
for element in structure:
if element != '/':
embedding[element_idx, :] = torch.tensor(self.embeddings.item().get(element), dtype=torch.float)
element_idx += 1
if element_idx == 6:
indicator[0] = 1.
# if meta_target.dtype == np.int8:
# meta_target = meta_target.astype(n
SYMBOL INDEX (212 symbols across 18 files)
FILE: src/dataset/AoT.py
class AoTNode (line 13) | class AoTNode(object):
method __init__ (line 22) | def __init__(self, name, level, node_type, is_pg=False):
method insert (line 29) | def insert(self, node):
method _insert (line 39) | def _insert(self, node):
method _resample (line 49) | def _resample(self, change_number):
method __repr__ (line 62) | def __repr__(self):
method __str__ (line 65) | def __str__(self):
class Root (line 69) | class Root(AoTNode):
method __init__ (line 71) | def __init__(self, name, is_pg=False):
method sample (line 74) | def sample(self):
method resample (line 88) | def resample(self, change_number=False):
method prune (line 91) | def prune(self, rule_groups):
method prepare (line 110) | def prepare(self):
method sample_new (line 128) | def sample_new(self, component_idx, attr_name, min_level, max_level, r...
class Structure (line 141) | class Structure(AoTNode):
method __init__ (line 143) | def __init__(self, name, is_pg=False):
method _sample (line 146) | def _sample(self):
method _prune (line 154) | def _prune(self, rule_groups):
method _sample_new (line 166) | def _sample_new(self, component_idx, attr_name, min_level, max_level, ...
class Component (line 170) | class Component(AoTNode):
method __init__ (line 172) | def __init__(self, name, is_pg=False):
method _sample (line 175) | def _sample(self):
method _prune (line 183) | def _prune(self, rule_group):
method _sample_new (line 193) | def _sample_new(self, attr_name, min_level, max_level, component):
class Layout (line 197) | class Layout(AoTNode):
method __init__ (line 202) | def __init__(self, name, layout_constraint, entity_constraint,
method add_new (line 231) | def add_new(self, *bboxes):
method resample (line 248) | def resample(self, change_number=False):
method _sample (line 251) | def _sample(self):
method _resample (line 277) | def _resample(self, change_number):
method _update_constraint (line 303) | def _update_constraint(self, rule_group):
method reset_constraint (line 360) | def reset_constraint(self, attr):
method _sample_new (line 366) | def _sample_new(self, attr_name, min_level, max_level, layout):
class Entity (line 414) | class Entity(AoTNode):
method __init__ (line 416) | def __init__(self, name, bbox, entity_constraint):
method reset_constraint (line 432) | def reset_constraint(self, attr, min_level, max_level):
method resample (line 439) | def resample(self):
FILE: src/dataset/Attribute.py
class Attribute (line 12) | class Attribute(object):
method __init__ (line 24) | def __init__(self, name):
method sample (line 30) | def sample(self):
method get_value (line 33) | def get_value(self):
method set_value (line 36) | def set_value(self):
method __repr__ (line 39) | def __repr__(self):
method __str__ (line 42) | def __str__(self):
class Number (line 46) | class Number(Attribute):
method __init__ (line 48) | def __init__(self, min_level=NUM_MIN, max_level=NUM_MAX):
method sample (line 55) | def sample(self, min_level=NUM_MIN, max_level=NUM_MAX):
method sample_new (line 62) | def sample_new(self, min_level=None, max_level=None, previous_values=N...
method get_value_level (line 78) | def get_value_level(self):
method set_value_level (line 81) | def set_value_level(self, value_level):
method get_value (line 84) | def get_value(self, value_level=None):
class Type (line 90) | class Type(Attribute):
method __init__ (line 92) | def __init__(self, min_level=TYPE_MIN, max_level=TYPE_MAX):
method sample (line 99) | def sample(self, min_level=TYPE_MIN, max_level=TYPE_MAX):
method sample_new (line 104) | def sample_new(self, min_level=None, max_level=None, previous_values=N...
method get_value_level (line 116) | def get_value_level(self):
method set_value_level (line 119) | def set_value_level(self, value_level):
method get_value (line 122) | def get_value(self, value_level=None):
class Size (line 128) | class Size(Attribute):
method __init__ (line 130) | def __init__(self, min_level=SIZE_MIN, max_level=SIZE_MAX):
method sample (line 137) | def sample(self, min_level=SIZE_MIN, max_level=SIZE_MAX):
method sample_new (line 142) | def sample_new(self, min_level=None, max_level=None, previous_values=N...
method get_value_level (line 154) | def get_value_level(self):
method set_value_level (line 157) | def set_value_level(self, value_level):
method get_value (line 160) | def get_value(self, value_level=None):
class Color (line 166) | class Color(Attribute):
method __init__ (line 168) | def __init__(self, min_level=COLOR_MIN, max_level=COLOR_MAX):
method sample (line 175) | def sample(self, min_level=COLOR_MIN, max_level=COLOR_MAX):
method sample_new (line 180) | def sample_new(self, min_level=None, max_level=None, previous_values=N...
method get_value_level (line 192) | def get_value_level(self):
method set_value_level (line 195) | def set_value_level(self, value_level):
method get_value (line 198) | def get_value(self, value_level=None):
class Angle (line 204) | class Angle(Attribute):
method __init__ (line 206) | def __init__(self, min_level=ANGLE_MIN, max_level=ANGLE_MAX):
method sample (line 213) | def sample(self, min_level=ANGLE_MIN, max_level=ANGLE_MAX):
method sample_new (line 218) | def sample_new(self, min_level=None, max_level=None, previous_values=N...
method get_value_level (line 230) | def get_value_level(self):
method set_value_level (line 233) | def set_value_level(self, value_level):
method get_value (line 236) | def get_value(self, value_level=None):
class Uniformity (line 242) | class Uniformity(Attribute):
method __init__ (line 244) | def __init__(self, min_level=UNI_MIN, max_level=UNI_MAX):
method sample (line 251) | def sample(self):
method sample_new (line 254) | def sample_new(self):
method set_value_level (line 258) | def set_value_level(self, value_level):
method get_value_level (line 261) | def get_value_level(self):
method get_value (line 264) | def get_value(self, value_level=None):
class Position (line 270) | class Position(Attribute):
method __init__ (line 276) | def __init__(self, pos_type, pos_list):
method sample (line 293) | def sample(self, num):
method sample_new (line 302) | def sample_new(self, num, previous_values=None):
method sample_add (line 322) | def sample_add(self, num):
method get_value_idx (line 337) | def get_value_idx(self):
method set_value_idx (line 340) | def set_value_idx(self, value_idx):
method get_value (line 344) | def get_value(self, value_idx=None):
method remove (line 352) | def remove(self, bbox):
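Every concrete attribute above (Number, Type, Size, Color, Angle) repeats one template: an integer value level indexes a constant table, `sample` draws a level inside [min_level, max_level], and `sample_new` draws a level not yet used, which is how candidate answers perturb a single attribute. A hedged sketch of the template follows; the SIZE_VALUES table is illustrative, as the real tables live in src/dataset/const.py.

```python
import random

# Sketch of the shared Attribute template: a sampled integer "level"
# indexes into a fixed table of concrete values. SIZE_VALUES here is
# illustrative; the real tables live in src/dataset/const.py.
SIZE_VALUES = [0.4, 0.5, 0.6, 0.7, 0.8, 0.9]

class Size(object):
    def __init__(self, min_level=0, max_level=len(SIZE_VALUES) - 1):
        self.min_level = min_level
        self.max_level = max_level
        self.value_level = random.randint(min_level, max_level)

    def sample(self, min_level=None, max_level=None):
        lo = self.min_level if min_level is None else min_level
        hi = self.max_level if max_level is None else max_level
        self.value_level = random.randint(lo, hi)

    def sample_new(self, previous_values=None):
        # Draw a level distinct from those already used in the row.
        used = set(previous_values or [self.value_level])
        return random.choice([l for l in range(self.min_level, self.max_level + 1)
                              if l not in used])

    def get_value(self, value_level=None):
        level = self.value_level if value_level is None else value_level
        return SIZE_VALUES[level]
```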
FILE: src/dataset/Rule.py
function Rule_Wrapper (line 11) | def Rule_Wrapper(name, attr, param, component_idx):
class Rule (line 26) | class Rule(object):
method __init__ (line 31) | def __init__(self, name, attr, params, component_idx=0):
method sample (line 47) | def sample(self):
method apply_rule (line 53) | def apply_rule(self, aot, in_aot=None):
class Constant (line 65) | class Constant(Rule):
method __init__ (line 69) | def __init__(self, name, attr, param, component_idx):
method apply_rule (line 72) | def apply_rule(self, aot, in_aot=None):
class Progression (line 78) | class Progression(Rule):
method __init__ (line 82) | def __init__(self, name, attr, param, component_idx):
method apply_rule (line 87) | def apply_rule(self, aot, in_aot=None):
class Arithmetic (line 141) | class Arithmetic(Rule):
method __init__ (line 146) | def __init__(self, name, attr, param, component_idx):
method apply_rule (line 152) | def apply_rule(self, aot, in_aot=None):
class Distribute_Three (line 326) | class Distribute_Three(Rule):
method __init__ (line 330) | def __init__(self, name, attr, param, component_idx):
method apply_rule (line 335) | def apply_rule(self, aot, in_aot=None):
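Rule_Wrapper is a small factory: the rule name string selects the matching Rule subclass, and each subclass's apply_rule derives the next panel in a row from the current one. The sketch below operates on a bare integer level instead of an AoT panel and drops component_idx, so the signatures are simplified relative to the repo's apply_rule(aot, in_aot=None).

```python
# Sketch of the Rule_Wrapper factory pattern, simplified to integers.
class Rule(object):
    def __init__(self, name, attr, param):
        self.name, self.attr, self.param = name, attr, param

class Constant(Rule):
    def apply_rule(self, value):
        return value                     # attribute is constant along the row

class Progression(Rule):
    def apply_rule(self, value):
        return value + self.param        # e.g. param in {-2, -1, 1, 2}

def Rule_Wrapper(name, attr, param):
    ruleset = {"Constant": Constant, "Progression": Progression}
    return ruleset[name](name, attr, param)

rule = Rule_Wrapper("Progression", "Size", 1)
print(rule.apply_rule(2))                # -> 3
```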
FILE: src/dataset/api.py
class Bunch (line 12) | class Bunch:
method __init__ (line 14) | def __init__(self, **kwds):
function get_real_bbox (line 18) | def get_real_bbox(entity_bbox, entity_type, entity_size, entity_angle):
function get_mask (line 67) | def get_mask(entity_bbox, entity_type, entity_size, entity_angle):
function rle_encode (line 80) | def rle_encode(img):
function rle_decode (line 92) | def rle_decode(mask_rle, shape):
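The rle_encode/rle_decode pair stores binary entity masks compactly. The sketch below is a standard run-length coding over the flattened mask with 1-indexed run starts, a common convention; the repo's exact on-disk format is not verified here.

```python
import numpy as np

def rle_encode(img):
    # img: binary numpy array; returns "start length start length ..."
    # with 1-indexed starts over the flattened array.
    pixels = np.concatenate([[0], img.flatten(), [0]])
    runs = np.where(pixels[1:] != pixels[:-1])[0] + 1
    runs[1::2] -= runs[::2]              # convert run ends to run lengths
    return " ".join(str(x) for x in runs)

def rle_decode(mask_rle, shape):
    s = mask_rle.split()
    starts, lengths = [np.asarray(x, dtype=int) for x in (s[0::2], s[1::2])]
    starts -= 1                          # back to 0-indexed positions
    img = np.zeros(shape[0] * shape[1], dtype=np.uint8)
    for lo, length in zip(starts, lengths):
        img[lo:lo + length] = 1
    return img.reshape(shape)

mask = np.zeros((4, 4), dtype=np.uint8); mask[1:3, 1:3] = 1
assert (rle_decode(rle_encode(mask), (4, 4)) == mask).all()
```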
FILE: src/dataset/build_tree.py
function build_center_single (line 9) | def build_center_single():
function build_distribute_four (line 34) | def build_distribute_four():
function build_distribute_nine (line 62) | def build_distribute_nine():
function build_left_center_single_right_center_single (line 95) | def build_left_center_single_right_center_single():
function build_up_center_single_down_center_single (line 133) | def build_up_center_single_down_center_single():
function build_in_center_single_out_center_single (line 171) | def build_in_center_single_out_center_single():
function build_in_distribute_four_out_center_single (line 211) | def build_in_distribute_four_out_center_single():
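Each build_* function above assembles the fixed skeleton for one of RAVEN's seven figure configurations; they differ only in which Layout nodes and constraints are wired under the Root, Structure, and Component levels. A sketch of the pattern with a stand-in Node class; the "Singleton" and "Grid" names are illustrative, not taken from the source.

```python
# Sketch of the build_* pattern: every configuration wires a fixed
# Root -> Structure -> Component -> Layout skeleton and differs only
# in its layouts and constraints. Node stands in for the AoT.py classes.
class Node(object):
    def __init__(self, name):
        self.name, self.children = name, []

    def insert(self, child):
        self.children.append(child)
        return child

def build_center_single():
    root = Node("center_single")
    structure = root.insert(Node("Singleton"))
    component = structure.insert(Node("Grid"))
    component.insert(Node("Center_Single"))   # Layout; constraints omitted
    return root
```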
FILE: src/dataset/constraints.py
function gen_layout_constraint (line 9) | def gen_layout_constraint(pos_type, pos_list,
function gen_entity_constraint (line 18) | def gen_entity_constraint(type_min=TYPE_MIN, type_max=TYPE_MAX,
function rule_constraint (line 29) | def rule_constraint(rule_list, num_min, num_max,
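The constraint generators plausibly bundle per-attribute [min_level, max_level] bounds that rule_constraint later narrows once rules are sampled. A toy sketch under that assumption; the real generators return richer objects (Position lists, Uniformity and Angle bounds, and so on), so this is only the shape of the idea.

```python
# Toy sketch: a constraint as a bundle of per-attribute level bounds.
# The default ranges below are illustrative, not the repo's constants.
def gen_entity_constraint(type_min=0, type_max=4,
                          size_min=0, size_max=5,
                          color_min=0, color_max=9):
    return {"Type":  (type_min, type_max),
            "Size":  (size_min, size_max),
            "Color": (color_min, color_max)}

constraint = gen_entity_constraint(size_min=2)   # rule sampling narrows bounds
```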
FILE: src/dataset/main.py
function merge_component (line 28) | def merge_component(dst_aot, src_aot, component_idx):
function fuse (line 33) | def fuse(args, all_configs):
function separate (line 161) | def separate(args, all_configs):
function main (line 290) | def main():
FILE: src/dataset/rendering.py
function imshow (line 12) | def imshow(array):
function imsave (line 17) | def imsave(array, filepath):
function generate_matrix (line 22) | def generate_matrix(array_list):
function generate_answers (line 37) | def generate_answers(array_list):
function generate_matrix_answer (line 51) | def generate_matrix_answer(array_list):
function merge_matrix_answer (line 66) | def merge_matrix_answer(matrix, answer):
function render_panel (line 74) | def render_panel(root):
function render_structure (line 89) | def render_structure(structure_name):
function render_entity (line 102) | def render_entity(entity):
function shift (line 183) | def shift(img, dx, dy):
function rotate (line 189) | def rotate(img, angle, center=CENTER):
function scale (line 195) | def scale(img, tx, ty, center=CENTER):
function layer_add (line 201) | def layer_add(lower_layer_np, higher_layer_np):
function draw_triangle (line 210) | def draw_triangle(img, pts, color, width):
function draw_square (line 222) | def draw_square(img, pt1, pt2, color, width):
function draw_pentagon (line 246) | def draw_pentagon(img, pts, color, width):
function draw_hexagon (line 258) | def draw_hexagon(img, pts, color, width):
function draw_circle (line 270) | def draw_circle(img, center, radius, color, width):
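Rendering composes each panel from OpenCV drawing primitives (draw_triangle through draw_circle) and then applies affine helpers (shift, rotate, scale) around the canvas center. Below is a sketch of the rotate helper as a thin wrapper over cv2's affine machinery; it is a plausible reading of the signatures above, not the verbatim body, and IMAGE_SIZE = 160 is assumed from RAVEN's 160x160 panels.

```python
import cv2
import numpy as np

IMAGE_SIZE = 160                              # RAVEN panels are 160x160
CENTER = (IMAGE_SIZE / 2, IMAGE_SIZE / 2)

def rotate(img, angle, center=CENTER):
    # Rotate about the canvas center via an affine warp.
    matrix = cv2.getRotationMatrix2D(center, angle, 1.0)
    return cv2.warpAffine(img, matrix, (img.shape[1], img.shape[0]))

canvas = np.zeros((IMAGE_SIZE, IMAGE_SIZE), dtype=np.uint8)
cv2.circle(canvas, (80, 80), 30, 255, 2)      # draw_circle-style primitive
rotated = rotate(canvas, 45)
```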
FILE: src/dataset/sampling.py
function sample_rules (line 11) | def sample_rules():
function sample_attr_avail (line 26) | def sample_attr_avail(rule_groups, row_3_3):
function sample_attr (line 91) | def sample_attr(attrs_list):
FILE: src/dataset/serialize.py
function n_tree_serialize (line 13) | def n_tree_serialize(aot):
function serialize_aot (line 28) | def serialize_aot(aot):
function serialize_rules (line 44) | def serialize_rules(rule_groups):
function dom_problem (line 77) | def dom_problem(instances, rule_groups):
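serialize.py turns parse trees and rule groups into XML annotations via xml.etree.ElementTree; dom_problem presumably nests one element per panel and entity. A toy sketch of that serialization style follows; tag and attribute names are illustrative only.

```python
import xml.etree.ElementTree as ET

# Toy sketch of nesting one element per panel and entity.
data = ET.Element("Data")
panel = ET.SubElement(data, "Panel")
ET.SubElement(panel, "Entity", Type="triangle", Size="0.5")
xml_string = ET.tostring(data).decode()  # "<Data><Panel><Entity .../></Panel></Data>"
```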
FILE: src/dataset/solver.py
function solve (line 7) | def solve(rule_groups, context, candidates):
function check_num_pos (line 40) | def check_num_pos(rule_num_pos, context, candidate):
function check_consistency (line 116) | def check_consistency(candidate, attr, component_idx):
function check_entity (line 129) | def check_entity(rule, context, candidate, attr, regenerate):
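solver.py's docstring describes a search-based heuristic: re-apply the sampled rules to the context panels and keep the candidate consistent with every rule group. The toy sketch below captures that filtering idea with integer-valued panels in place of AoT parse graphs.

```python
# Toy sketch of the search-based solving idea: re-apply the row rule to
# the context and keep the matching candidate. Panels are plain ints
# here; the repo operates on parse graphs and full rule groups.
def solve(rule, context, candidates):
    expected = rule(rule(context[6]))         # advance twice along row 3
    matches = [i for i, c in enumerate(candidates) if c == expected]
    assert len(matches) == 1                  # RAVEN guarantees a unique answer
    return matches[0]

progression = lambda value: value + 1
context = [1, 2, 3, 2, 3, 4, 3, 4]            # the eight visible panels, row-major
print(solve(progression, context, [6, 5, 4, 7]))   # -> 1 (the value 5)
```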
FILE: src/model/basic_model.py
class BasicModel (line 5) | class BasicModel(nn.Module):
method __init__ (line 6) | def __init__(self, args):
method load_model (line 10) | def load_model(self, path, epoch):
method save_model (line 14) | def save_model(self, path, epoch, acc, loss):
method compute_loss (line 17) | def compute_loss(self, output, target, meta_target, meta_structure):
method train_ (line 20) | def train_(self, image, target, meta_target, meta_structure, embedding...
method validate_ (line 31) | def validate_(self, image, target, meta_target, meta_structure, embedd...
method test_ (line 40) | def test_(self, image, target, meta_target, meta_structure, embedding,...
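BasicModel centralizes the training plumbing: subclasses implement forward and compute_loss, while train_/validate_/test_ wrap them with the optimizer step and accuracy bookkeeping. A sketch of what train_ plausibly does, assuming each subclass's __init__ sets self.optimizer (the index does not show that field):

```python
import torch.nn as nn

# Sketch of the BasicModel contract: subclasses supply forward() and
# compute_loss(); the base class owns the shared optimizer step.
class BasicModel(nn.Module):
    def train_(self, image, target, meta_target, meta_structure,
               embedding, indicator):
        self.optimizer.zero_grad()                   # optimizer set by subclass
        output = self(image, embedding, indicator)   # dispatches to forward()
        loss = self.compute_loss(output, target, meta_target, meta_structure)
        loss.backward()
        self.optimizer.step()
        accuracy = (output.argmax(dim=1) == target).float().mean().item() * 100
        return loss.item(), accuracy
```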
FILE: src/model/cnn_lstm.py
class conv_module (line 11) | class conv_module(nn.Module):
method __init__ (line 12) | def __init__(self):
method forward (line 27) | def forward(self, x):
class lstm_module (line 38) | class lstm_module(nn.Module):
method __init__ (line 39) | def __init__(self):
method forward (line 45) | def forward(self, x):
class CNN_LSTM (line 51) | class CNN_LSTM(BasicModel):
method __init__ (line 52) | def __init__(self, args):
method compute_loss (line 59) | def compute_loss(self, output, target, meta_target, meta_structure):
method forward (line 64) | def forward(self, x, embedding, indicator):
FILE: src/model/cnn_mlp.py
class conv_module (line 11) | class conv_module(nn.Module):
method __init__ (line 12) | def __init__(self):
method forward (line 27) | def forward(self, x):
class mlp_module (line 38) | class mlp_module(nn.Module):
method __init__ (line 39) | def __init__(self):
method forward (line 46) | def forward(self, x):
class CNN_MLP (line 52) | class CNN_MLP(BasicModel):
method __init__ (line 53) | def __init__(self, args):
method compute_loss (line 60) | def compute_loss(self, output, target, meta_target, meta_structure):
method forward (line 65) | def forward(self, x, embedding, indicator):
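CNN_MLP (and CNN_LSTM similarly) treats the 8 context panels plus 8 candidate panels as a 16-channel image: a small conv stack extracts a feature vector and an MLP emits one logit per candidate. A shape-flow sketch under an assumed 32-channel width and 80x80 input; neither number is taken from the repo.

```python
import torch
import torch.nn as nn

# Shape-flow sketch of the CNN_MLP baseline: the 16 panels (8 context +
# 8 answer candidates) are stacked as input channels.
conv = nn.Sequential(
    nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.BatchNorm2d(32), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, stride=2), nn.BatchNorm2d(32), nn.ReLU(),
)
x = torch.randn(4, 16, 80, 80)                 # (batch, panels-as-channels, H, W)
features = conv(x).flatten(start_dim=1)        # -> (4, 32 * 19 * 19)
scores = nn.Linear(features.shape[1], 8)(features)   # one logit per candidate
```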
FILE: src/model/fc_tree_net.py
class FCTreeNet (line 7) | class FCTreeNet(torch.nn.Module):
method __init__ (line 8) | def __init__(self, in_dim=300, img_dim=256, use_cuda=True):
method forward (line 27) | def forward(self, image_feature, input, indicator):
FILE: src/model/main.py
function train (line 65) | def train(epoch):
function validate (line 89) | def validate(epoch):
function test (line 114) | def test(epoch):
function main (line 136) | def main():
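model/main.py follows the standard epoch driver: one training pass, then validation and test evaluation per epoch. A generic sketch with stub loops standing in for the repo's DataLoader iteration:

```python
# Generic sketch of the epoch driver; the stubs stand in for the
# repo's loops over train/val/test DataLoaders.
def train(epoch):
    return 0.0                                 # average training loss (stub)

def validate(epoch):
    return 0.0                                 # validation accuracy (stub)

def test(epoch):
    return 0.0                                 # test accuracy (stub)

def main(epochs=10):
    for epoch in range(epochs):
        train(epoch)
        val_acc = validate(epoch)
        test_acc = test(epoch)
        # The repo checkpoints via BasicModel.save_model(path, epoch, acc, loss).
        print("epoch %d: val %.2f, test %.2f" % (epoch, val_acc, test_acc))

main()
```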
FILE: src/model/resnet18.py
class identity (line 12) | class identity(nn.Module):
method __init__ (line 13) | def __init__(self):
method forward (line 16) | def forward(self, x):
class mlp_module (line 19) | class mlp_module(nn.Module):
method __init__ (line 20) | def __init__(self):
method forward (line 27) | def forward(self, x):
class Resnet18_MLP (line 33) | class Resnet18_MLP(BasicModel):
method __init__ (line 34) | def __init__(self, args):
method compute_loss (line 45) | def compute_loss(self, output, target, meta_target, meta_structure):
method forward (line 63) | def forward(self, x, embedding, indicator):
FILE: src/model/utility/dataset_utility.py
class ToTensor (line 11) | class ToTensor(object):
method __call__ (line 12) | def __call__(self, sample):
class dataset (line 15) | class dataset(Dataset):
method __init__ (line 16) | def __init__(self, root_dir, dataset_type, img_size, transform=None, s...
method __len__ (line 25) | def __len__(self):
method __getitem__ (line 28) | def __getitem__(self, idx):
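The dataset class loads one .npz file per sample. Below is a sketch assuming the keys of the standard RAVEN release ("image" holding the 16 stacked 160x160 panels, "target" the answer index, as described in assets/README.md); resizing, meta targets, and the embedding/indicator extras are omitted.

```python
import glob
import os

import numpy as np
import torch
from torch.utils.data import Dataset

# Sketch of the npz-backed dataset; key names assumed from the
# standard RAVEN release format.
class RavenDataset(Dataset):
    def __init__(self, root_dir, dataset_type):
        pattern = os.path.join(root_dir, "*", "*_%s.npz" % dataset_type)
        self.file_names = sorted(glob.glob(pattern))

    def __len__(self):
        return len(self.file_names)

    def __getitem__(self, idx):
        data = np.load(self.file_names[idx])
        image = torch.from_numpy(data["image"]).float()   # (16, 160, 160) panels
        target = torch.tensor(int(data["target"]))        # index of the answer
        return image, target
```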
Condensed preview: 30 files, each showing path, character count, and a content snippet.
[
{
"path": ".gitignore",
"chars": 1248,
"preview": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packagi"
},
{
"path": "LICENSE",
"chars": 35149,
"preview": " GNU GENERAL PUBLIC LICENSE\n Version 3, 29 June 2007\n\n Copyright (C) 2007 Free "
},
{
"path": "README.md",
"chars": 5887,
"preview": "# RAVEN\n\nThis repo contains code for our CVPR 2019 paper.\n\n[RAVEN: A Dataset for <u>R</u>elational and <u>A</u>nalogical"
},
{
"path": "assets/README.md",
"chars": 3509,
"preview": "# Dataset Format\n\nThe dataset folder is organized as follows:\n\n```\ncenter_single/\n RAVEN_0_train.npz\n RAVEN_0_trai"
},
{
"path": "requirements.txt",
"chars": 87,
"preview": "numpy\nscipy\nmatplotlib\npillow\nscikit-image\nopencv-contrib-python\ntqdm\ntorch\ntorchvision"
},
{
"path": "src/dataset/AoT.py",
"chars": 18967,
"preview": "# -*- coding: utf-8 -*-\n\n\nimport copy\n\nimport numpy as np\nfrom scipy.misc import comb\n\nfrom Attribute import Angle, Colo"
},
{
"path": "src/dataset/Attribute.py",
"chars": 12574,
"preview": "# -*- coding: utf-8 -*-\n\n\nimport numpy as np\n\nfrom const import (ANGLE_MAX, ANGLE_MIN, ANGLE_VALUES, COLOR_MAX, COLOR_MI"
},
{
"path": "src/dataset/Rule.py",
"chars": 26912,
"preview": "# -*- coding: utf-8 -*-\n\n\nimport copy\n\nimport numpy as np\n\nfrom const import COLOR_MAX, COLOR_MIN\n\n\ndef Rule_Wrapper(nam"
},
{
"path": "src/dataset/__init__.py",
"chars": 101,
"preview": "\"\"\" RAVEN dataset generation code\n\nAuthor: Chi Zhang\nData: 05/14/2019\nContact: chi.zhang@ucla.edu\n\"\"\""
},
{
"path": "src/dataset/api.py",
"chars": 4819,
"preview": "# -*- coding: utf-8 -*-\n\n\nimport xml.etree.ElementTree as ET\n\nimport cv2\nimport numpy as np\nfrom const import DEFAULT_WI"
},
{
"path": "src/dataset/build_tree.py",
"chars": 8377,
"preview": "# -*- coding: utf-8 -*-\n\n\nfrom AoT import Component, Layout, Root, Structure\nfrom constraints import (gen_entity_constra"
},
{
"path": "src/dataset/const.py",
"chars": 3744,
"preview": "# -*- coding: utf-8 -*-\n\n\n# Maximum number of components in a RPM\nMAX_COMPONENTS = 2\n\n# Canvas parameters\nIMAGE_SIZE = 1"
},
{
"path": "src/dataset/constraints.py",
"chars": 5913,
"preview": "# -*- coding: utf-8 -*-\n\n\nfrom const import (ANGLE_MAX, ANGLE_MIN, COLOR_MAX, COLOR_MIN, NUM_MAX,\n NUM"
},
{
"path": "src/dataset/main.py",
"chars": 15017,
"preview": "# -*- coding: utf-8 -*-\n\n\nimport argparse\nimport copy\nimport os\nimport random\nimport sys\n\nimport numpy as np\nfrom tqdm i"
},
{
"path": "src/dataset/rendering.py",
"chars": 10184,
"preview": "# -*- coding: utf-8 -*-\n\n\nimport cv2\nimport numpy as np\nfrom PIL import Image\n\nfrom AoT import Root\nfrom const import CE"
},
{
"path": "src/dataset/sampling.py",
"chars": 4499,
"preview": "# -*- coding: utf-8 -*-\n\n\nimport numpy as np\nfrom scipy.misc import comb\n\nfrom const import MAX_COMPONENTS, RULE_ATTR\nfr"
},
{
"path": "src/dataset/serialize.py",
"chars": 4565,
"preview": "# -*- coding: utf-8 -*-\n\n\nimport json\nimport xml.etree.ElementTree as ET\n\nimport numpy as np\n\nfrom const import META_STR"
},
{
"path": "src/dataset/solver.py",
"chars": 9937,
"preview": "# -*- coding: utf-8 -*-\n\n\nimport numpy as np\n\n\ndef solve(rule_groups, context, candidates):\n \"\"\"Search-based Heuristi"
},
{
"path": "src/model/__init__.py",
"chars": 95,
"preview": "\"\"\" RAVEN benchmarking code\n\nAuthor: Chi Zhang\nData: 05/14/2019\nContact: chi.zhang@ucla.edu\n\"\"\""
},
{
"path": "src/model/basic_model.py",
"chars": 1939,
"preview": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nclass BasicModel(nn.Module):\n def __init__(self, "
},
{
"path": "src/model/cnn_lstm.py",
"chars": 2493,
"preview": "import numpy as np\n\nimport torch\nimport torch.nn as nn\nimport torch.optim as optim\nimport torch.nn.functional as F\n\nfrom"
},
{
"path": "src/model/cnn_mlp.py",
"chars": 2467,
"preview": "import numpy as np\n\nimport torch\nimport torch.nn as nn\nimport torch.optim as optim\nimport torch.nn.functional as F\n\nfrom"
},
{
"path": "src/model/const/__init__.py",
"chars": 20,
"preview": "from const import *\n"
},
{
"path": "src/model/const/const.py",
"chars": 3693,
"preview": "# -*- coding: utf-8 -*-\n\n\n# Maximum number of components in a RPM\nMAX_COMPONENTS = 2\n\n# Canvas parameters\nIMAGE_SIZE = 1"
},
{
"path": "src/model/fc_tree_net.py",
"chars": 3565,
"preview": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport numpy as np\n\n\nclass FCTreeNet(torch.nn.Module)"
},
{
"path": "src/model/main.py",
"chars": 5355,
"preview": "import os\nimport numpy as np\nimport argparse\n\nimport torch\nimport torch.nn as nn\nimport torch.optim as optim\nfrom torch."
},
{
"path": "src/model/resnet18.py",
"chars": 2837,
"preview": "import numpy as np\n\nimport torch\nimport torch.nn as nn\nimport torch.optim as optim\nimport torch.nn.functional as F\nimpor"
},
{
"path": "src/model/utility/__init__.py",
"chars": 29,
"preview": "from dataset_utility import *"
},
{
"path": "src/model/utility/dataset_utility.py",
"chars": 2932,
"preview": "import os\nimport glob\nimport numpy as np\nfrom scipy import misc\n\nimport torch\nfrom torch.utils.data import Dataset, Data"
}
]
// ... and 1 more file