Repository: yz93/LAVT-RIS
Branch: main
Commit: 1da0af9f21b6
Files: 59
Total size: 1.1 MB
Directory structure:
gitextract_240sk8kr/
├── LICENSE
├── README.md
├── args.py
├── bert/
│   ├── activations.py
│   ├── configuration_bert.py
│   ├── configuration_utils.py
│   ├── file_utils.py
│   ├── generation_utils.py
│   ├── modeling_bert.py
│   ├── modeling_utils.py
│   ├── tokenization_bert.py
│   ├── tokenization_utils.py
│   └── tokenization_utils_base.py
├── data/
│   └── dataset_refer_bert.py
├── demo_inference.py
├── lib/
│   ├── _utils.py
│   ├── backbone.py
│   ├── mask_predictor.py
│   ├── mmcv_custom/
│   │   ├── __init__.py
│   │   └── checkpoint.py
│   └── segmentation.py
├── refer/
│   ├── LICENSE
│   ├── Makefile
│   ├── README.md
│   ├── data/
│   │   └── README.md
│   ├── evaluation/
│   │   ├── __init__.py
│   │   ├── bleu/
│   │   │   ├── LICENSE
│   │   │   ├── __init__.py
│   │   │   ├── bleu.py
│   │   │   └── bleu_scorer.py
│   │   ├── cider/
│   │   │   ├── __init__.py
│   │   │   ├── cider.py
│   │   │   └── cider_scorer.py
│   │   ├── meteor/
│   │   │   ├── __init__.py
│   │   │   └── meteor.py
│   │   ├── readme.txt
│   │   ├── refEvaluation.py
│   │   ├── rouge/
│   │   │   ├── __init__.py
│   │   │   └── rouge.py
│   │   └── tokenizer/
│   │       ├── __init__.py
│   │       ├── ptbtokenizer.py
│   │       └── stanford-corenlp-3.4.1.jar
│   ├── external/
│   │   ├── README.md
│   │   ├── __init__.py
│   │   ├── _mask.pyx
│   │   ├── mask.py
│   │   ├── maskApi.c
│   │   └── maskApi.h
│   ├── pyEvalDemo.ipynb
│   ├── pyReferDemo.ipynb
│   ├── refer.py
│   ├── setup.py
│   └── test/
│       ├── sample_expressions_testA.json
│       └── sample_expressions_testB.json
├── requirements.txt
├── test.py
├── train.py
├── transforms.py
└── utils.py
================================================
FILE CONTENTS
================================================
================================================
FILE: LICENSE
================================================
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year>  <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program>  Copyright (C) <year>  <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.
================================================
FILE: README.md
================================================
# LAVT: Language-Aware Vision Transformer for Referring Image Segmentation
Welcome to the official repository for the method presented in
"LAVT: Language-Aware Vision Transformer for Referring Image Segmentation."

Code in this repository is written using [PyTorch](https://pytorch.org/) and is organized in the following way (assuming the working directory is the root directory of this repository):
* `./lib` contains files implementing the main network.
* Inside `./lib`, `_utils.py` defines the highest-level model, which incorporates the backbone network
defined in `backbone.py` and the simple mask decoder defined in `mask_predictor.py`.
`segmentation.py` provides the model interface and initialization functions.
* `./bert` contains files migrated from [Hugging Face Transformers v3.0.2](https://huggingface.co/transformers/v3.0.2/quicktour.html),
which implement the BERT language model.
We used Transformers v3.0.2 during development but it had a bug that would appear when using `DistributedDataParallel`.
Therefore we maintain a copy of the relevant source files in this repository.
This way, the bug is fixed and code in this repository is self-contained.
* `./train.py` is invoked to train the model.
* `./test.py` is invoked to run inference on the evaluation subsets after training.
* `./refer` contains data pre-processing code and is also where data should be placed, including the images and all annotations.
It is cloned from [refer](https://github.com/lichengunc/refer).
* `./data/dataset_refer_bert.py` is where the dataset class is defined.
* `./utils.py` defines functions that track training statistics and setup
functions for `DistributedDataParallel`.
## Updates
**April 13th, 2023**. Using the Dice loss instead of the cross-entropy loss can improve results. We will add code and release weights when we get a chance.
**June 21st, 2022**. Uploaded the training logs and trained
model weights of lavt_one.
**June 9th, 2022**.
Added a more efficient implementation of LAVT.
* To train this new model, specify `--model` as `lavt_one`
(and `lavt` is still valid for specifying the old model).
The rest of the configuration stays unchanged.
* The difference between this version and the previous one
is that the language model has been moved inside the overall model,
so that `DistributedDataParallel` needs to be applied only once.
Applying it twice (on the standalone language model and the main branch)
as done in the old implementation led to low GPU utilization,
which slowed down training (a sketch of the difference follows below).
We recommend training this model on 8 GPUs
(and, as before, with a total batch size of 32).
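
For illustration, here is a minimal sketch of the two wrapping strategies. The module and variable names are made up for the example and are not the repository's actual classes:
```python
# Minimal conceptual sketch (toy names, not the actual LAVT classes).
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

# Old `lavt` style: the language model and the main branch are wrapped separately,
# i.e., DDP is applied twice:
#   bert_model = DDP(bert_model, device_ids=[local_rank])
#   vision_model = DDP(vision_model, device_ids=[local_rank])

# New `lavt_one` style: the language model is a submodule of one overall model,
# so DDP needs to be applied only once.
class OverallModel(nn.Module):
    def __init__(self, vision_model, language_model):
        super().__init__()
        self.vision = vision_model
        self.language = language_model

    def forward(self, image, input_ids, attention_mask):
        # encode the referring expression, then condition the vision branch on it
        text_feats = self.language(input_ids, attention_mask=attention_mask)[0]
        return self.vision(image, text_feats, attention_mask)

# model = OverallModel(vision_model, language_model).cuda(local_rank)
# model = DDP(model, device_ids=[local_rank])  # applied once to the whole model
```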
## Setting Up
### Preliminaries
The code has been verified to work with PyTorch v1.7.1 and Python 3.7.
1. Clone this repository.
2. Change directory to root of this repository.
### Package Dependencies
1. Create a new Conda environment with Python 3.7 then activate it:
```shell
conda create -n lavt python==3.7
conda activate lavt
```
2. Install PyTorch v1.7.1 with a CUDA version that works on your cluster/machine (CUDA 10.2 is used in this example):
```shell
conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=10.2 -c pytorch
```
3. Install the packages in `requirements.txt` via `pip`:
```shell
pip install -r requirements.txt
```
### Datasets
1. Follow instructions in the `./refer` directory to set up subdirectories
and download annotations.
This directory is a git clone (minus two data files that we do not need)
from the [refer](https://github.com/lichengunc/refer) public API.
2. Download images from [COCO](https://cocodataset.org/#download).
Please use the first download link *2014 Train images [83K/13GB]*, and extract
the downloaded `train2014.zip` file to `./refer/data/images/mscoco/images`
(a sketch of the expected directory layout follows below).
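
For reference, after the two steps above the data directory should look roughly like the following. This layout is an assumption based on the instructions in the refer repository; the exact annotation files depend on which datasets you download:
```
./refer/data/
├── images/mscoco/images/train2014/   # extracted COCO 2014 training images
├── refcoco/                          # e.g., instances.json, refs(unc).p, refs(google).p
├── refcoco+/                         # e.g., instances.json, refs(unc).p
└── refcocog/                         # e.g., instances.json, refs(umd).p, refs(google).p
```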
### The Initialization Weights for Training
1. Create the `./pretrained_weights` directory where we will be storing the weights.
```shell
mkdir ./pretrained_weights
```
2. Download [pre-trained classification weights of
the Swin Transformer](https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_base_patch4_window12_384_22k.pth),
and put the `pth` file in `./pretrained_weights`.
These weights are needed for training to initialize the model.
### Trained Weights of LAVT for Testing
1. Create the `./checkpoints` directory where we will be storing the weights.
```shell
mkdir ./checkpoints
```
2. Download LAVT model weights (which are stored on Google Drive) using links below and put them in `./checkpoints`.
| [RefCOCO](https://drive.google.com/file/d/13D-OeEOijV8KTC3BkFP-gOJymc6DLwVT/view?usp=sharing) | [RefCOCO+](https://drive.google.com/file/d/1B8Q44ZWsc8Pva2xD_M-KFh7-LgzeH2-2/view?usp=sharing) | [G-Ref (UMD)](https://drive.google.com/file/d/1BjUnPVpALurkGl7RXXvQiAHhA-gQYKvK/view?usp=sharing) | [G-Ref (Google)](https://drive.google.com/file/d/1weiw5UjbPfo3tCBPfB8tu6xFXCUG16yS/view?usp=sharing) |
|---|---|---|---|
3. Model weights and training logs of the new lavt_one implementation are below.
| RefCOCO | RefCOCO+ | G-Ref (UMD) | G-Ref (Google) |
|:-----:|:-----:|:-----:|:-----:|
|[log](https://drive.google.com/file/d/1YIojIHqe3bxxsWOltifa2U9jH67hPHLM/view?usp=sharing) | [weights](https://drive.google.com/file/d/1xFMEXr6AGU97Ypj1yr8oo00uObbeIQvJ/view?usp=sharing)|[log](https://drive.google.com/file/d/1Z34T4gEnWlvcSUQya7txOuM0zdLK7MRT/view?usp=sharing) | [weights](https://drive.google.com/file/d/1HS8ZnGaiPJr-OmoUn4-4LVnVtD_zHY6w/view?usp=sharing)|[log](https://drive.google.com/file/d/14VAgahngOV8NA6noLZCqDoqaUrlW14v8/view?usp=sharing) | [weights](https://drive.google.com/file/d/14g8NzgZn6HzC6tP_bsQuWmh5LnOcovsE/view?usp=sharing)|[log](https://drive.google.com/file/d/1JBXfmlwemWSvs92Rky0TlHcVuuLpt4Da/view?usp=sharing) | [weights](https://drive.google.com/file/d/1IJeahFVLgKxu_BVmWacZs3oUzgTCeWcz/view?usp=sharing)|
* The Prec@K, overall IoU and mean IoU numbers in the training logs will differ
from the final results obtained by running `test.py`,
because only one out of multiple annotated expressions is
randomly selected and evaluated for each object during training.
But these numbers give a good idea about the test performance.
The two should be fairly close.
## Training
We use `DistributedDataParallel` from PyTorch.
The released `lavt` weights were trained using 4 x 32G V100 cards (max mem on each card was about 26G).
The released `lavt_one` weights were trained using 8 x 32G V100 cards (max mem on each card was about 13G).
More cards were used to accelerate training.
To run on 4 GPUs (with IDs 0, 1, 2, and 3) on a single node:
```shell
mkdir ./models
mkdir ./models/refcoco
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node 4 --master_port 12345 train.py --model lavt --dataset refcoco --model_id refcoco --batch-size 8 --lr 0.00005 --wd 1e-2 --swin_type base --pretrained_swin_weights ./pretrained_weights/swin_base_patch4_window12_384_22k.pth --epochs 40 --img_size 480 2>&1 | tee ./models/refcoco/output
mkdir ./models/refcoco+
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node 4 --master_port 12345 train.py --model lavt --dataset refcoco+ --model_id refcoco+ --batch-size 8 --lr 0.00005 --wd 1e-2 --swin_type base --pretrained_swin_weights ./pretrained_weights/swin_base_patch4_window12_384_22k.pth --epochs 40 --img_size 480 2>&1 | tee ./models/refcoco+/output
mkdir ./models/gref_umd
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node 4 --master_port 12345 train.py --model lavt --dataset refcocog --splitBy umd --model_id gref_umd --batch-size 8 --lr 0.00005 --wd 1e-2 --swin_type base --pretrained_swin_weights ./pretrained_weights/swin_base_patch4_window12_384_22k.pth --epochs 40 --img_size 480 2>&1 | tee ./models/gref_umd/output
mkdir ./models/gref_google
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node 4 --master_port 12345 train.py --model lavt --dataset refcocog --splitBy google --model_id gref_google --batch-size 8 --lr 0.00005 --wd 1e-2 --swin_type base --pretrained_swin_weights ./pretrained_weights/swin_base_patch4_window12_384_22k.pth --epochs 40 --img_size 480 2>&1 | tee ./models/gref_google/output
```
* *--model* is a pre-defined model name. Options include `lavt` and `lavt_one`. See [Updates](#updates).
* *--dataset* is the dataset name. One can choose from `refcoco`, `refcoco+`, and `refcocog`.
* *--splitBy* needs to be specified if and only if the dataset is G-Ref (which is also called RefCOCOg).
`umd` identifies the UMD partition and `google` identifies the Google partition.
* *--model_id* is the model name one should define oneself (*e.g.*, customize it to contain training/model configurations, dataset information, experiment IDs, *etc*.).
It is used in two ways: Training log will be saved as `./models/[args.model_id]/output` and the best checkpoint will be saved as `./checkpoints/model_best_[args.model_id].pth`.
* *--swin_type* specifies the version of the Swin Transformer.
One can choose from `tiny`, `small`, `base`, and `large`. The default is `base`.
* *--pretrained_swin_weights* specifies the path to pre-trained Swin Transformer weights used for model initialization.
* Note that currently we need to manually create the `./models/[args.model_id]` directory via `mkdir` before running `train.py`.
This is because we use `tee` to redirect `stdout` and `stderr` to `./models/[args.model_id]/output` for logging.
This is a nuisance and should be resolved in the future, *e.g.*, by using a proper logger or a bash script for initiating training.
## Testing
For RefCOCO/RefCOCO+, run one of
```shell
python test.py --model lavt --swin_type base --dataset refcoco --split val --resume ./checkpoints/refcoco.pth --workers 4 --ddp_trained_weights --window12 --img_size 480
python test.py --model lavt --swin_type base --dataset refcoco+ --split val --resume ./checkpoints/refcoco+.pth --workers 4 --ddp_trained_weights --window12 --img_size 480
```
* *--split* is the subset to evaluate, and one can choose from `val`, `testA`, and `testB`.
* *--resume* is the path to the weights of a trained model.
For G-Ref (UMD)/G-Ref (Google), run one of
```shell
python test.py --model lavt --swin_type base --dataset refcocog --splitBy umd --split val --resume ./checkpoints/gref_umd.pth --workers 4 --ddp_trained_weights --window12 --img_size 480
python test.py --model lavt --swin_type base --dataset refcocog --splitBy google --split val --resume ./checkpoints/gref_google.pth --workers 4 --ddp_trained_weights --window12 --img_size 480
```
* *--splitBy* specifies the partition to evaluate.
One can choose from `umd` or `google`.
* *--split* is the subset (according to the specified partition) to evaluate, and one can choose from `val` and `test` for the UMD partition, and only `val` for the Google partition.
* *--resume* is the path to the weights of a trained model.
## Results
1. The evaluation results (those reported in the paper) of LAVT trained with a cross-entropy loss and based on our original implementation are summarized as follows:
| Dataset | P@0.5 | P@0.6 | P@0.7 | P@0.8 | P@0.9 | Overall IoU | Mean IoU |
|:---------------:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----------:|:--------:|
| RefCOCO val | 84.46 | 80.90 | 75.28 | 64.71 | 34.30 | 72.73 | 74.46 |
| RefCOCO test A | 88.07 | 85.17 | 79.90 | 68.52 | 35.69 | 75.82 | 76.89 |
| RefCOCO test B | 79.12 | 74.94 | 69.17 | 59.37 | 34.45 | 68.79 | 70.94 |
| RefCOCO+ val | 74.44 | 70.91 | 65.58 | 56.34 | 30.23 | 62.14 | 65.81 |
| RefCOCO+ test A | 80.68 | 77.96 | 72.90 | 62.21 | 32.36 | 68.38 | 70.97 |
| RefCOCO+ test B | 65.66 | 61.85 | 55.94 | 47.56 | 27.24 | 55.10 | 59.23 |
| G-Ref val (UMD) | 70.81 | 65.28 | 58.60 | 47.49 | 22.73 | 61.24 | 63.34 |
| G-Ref test (UMD)| 71.54 | 66.38 | 59.00 | 48.21 | 23.10 | 62.09 | 63.62 |
|G-Ref val (Goog.)| 71.16 | 67.21 | 61.76 | 51.98 | 27.30 | 60.50 | 63.66 |
- We have validated LAVT on RefCOCO with multiple runs. The overall IoU on the val set generally lies in the range of 72.73±0.5%.
2. In the following, we report the results of LAVT trained with a multi-class Dice loss and based on the new implementation (`lavt_one`).
| Dataset | P@0.5 | P@0.6 | P@0.7 | P@0.8 | P@0.9 | Overall IoU | Mean IoU |
|:---------------:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----------:|:--------:|
| RefCOCO val | 85.87 | 82.13 | 76.64 | 65.45 | 35.30 | 73.50 | 75.41 |
| RefCOCO test A | 88.47 | 85.63 | 80.57 | 68.84 | 35.71 | 75.97 | 77.31 |
| RefCOCO test B | 80.20 | 76.49 | 70.34 | 60.12 | 34.94 | 69.33 | 71.86 |
| RefCOCO+ val | 76.19 | 72.27 | 66.82 | 56.87 | 30.15 | 63.79 | 67.65 |
| RefCOCO+ test A | 82.50 | 79.44 | 74.00 | 63.27 | 31.99 | 69.79 | 72.53 |
| RefCOCO+ test B | 68.03 | 63.35 | 57.29 | 47.92 | 26.98 | 56.49 | 61.22 |
| G-Ref val (UMD) | 75.82 | 71.06 | 63.99 | 52.98 | 27.31 | 64.02 | 67.41 |
| G-Ref test (UMD)| 76.12 | 71.13 | 64.58 | 53.62 | 28.03 | 64.49 | 67.45 |
|G-Ref val (Goog.)| 72.57 | 68.65 | 63.09 | 53.33 | 28.14 | 61.31 | 64.84 |
## Demo: Try LAVT on Your Own Image-Text Pairs
You can run inference on any image-text pair
and visualize the result by running the script `./demo_inference.py`.
Have fun!
## Citing LAVT
```
@inproceedings{yang2022lavt,
title={LAVT: Language-Aware Vision Transformer for Referring Image Segmentation},
author={Yang, Zhao and Wang, Jiaqi and Tang, Yansong and Chen, Kai and Zhao, Hengshuang and Torr, Philip HS},
booktitle={CVPR},
year={2022}
}
```
## Contributing
We appreciate all contributions.
It helps the project if you could
- report issues you are facing,
- give a :+1: on issues reported by others that are relevant to you,
- answer issues reported by others for which you have found solutions,
- and implement helpful new features or improve the code otherwise with pull requests.
## Acknowledgements
Code in this repository is built upon several public repositories.
Specifically,
* data pre-processing leverages the [refer](https://github.com/lichengunc/refer) repository,
* the backbone model is implemented based on code from [Swin Transformer for Semantic Segmentation](https://github.com/SwinTransformer/Swin-Transformer-Semantic-Segmentation),
* the training and testing pipelines are adapted from [RefVOS](https://github.com/miriambellver/refvos),
* and implementation of the BERT model (files in the bert directory) is from [Hugging Face Transformers v3.0.2](https://github.com/huggingface/transformers/tree/v3.0.2)
(we migrated over the relevant code to fix a bug and simplify the installation process).
Some of these repositories in turn adapt code from [OpenMMLab](https://github.com/open-mmlab) and [TorchVision](https://github.com/pytorch/vision).
We'd like to thank the authors/organizations of these repositories for open sourcing their projects.
## License
GNU GPLv3
================================================
FILE: args.py
================================================
import argparse


def get_parser():
    parser = argparse.ArgumentParser(description='LAVT training and testing')
    parser.add_argument('--amsgrad', action='store_true',
                        help='if true, set amsgrad to True in an Adam or AdamW optimizer.')
    parser.add_argument('-b', '--batch-size', default=8, type=int)
    parser.add_argument('--bert_tokenizer', default='bert-base-uncased', help='BERT tokenizer')
    parser.add_argument('--ck_bert', default='bert-base-uncased', help='pre-trained BERT weights')
    parser.add_argument('--dataset', default='refcoco', help='refcoco, refcoco+, or refcocog')
    parser.add_argument('--ddp_trained_weights', action='store_true',
                        help='Only needs to be specified when testing: '
                             'whether the weights to be loaded are from a DDP-trained model')
    parser.add_argument('--device', default='cuda:0', help='device')  # only used when testing on a single machine
    parser.add_argument('--epochs', default=40, type=int, metavar='N', help='number of total epochs to run')
    parser.add_argument('--fusion_drop', default=0.0, type=float, help='dropout rate for PWAMs')
    parser.add_argument('--img_size', default=480, type=int, help='input image size')
    parser.add_argument("--local_rank", type=int, help='local rank for DistributedDataParallel')
    parser.add_argument('--lr', default=0.00005, type=float, help='the initial learning rate')
    parser.add_argument('--mha', default='', help='If specified, should be in the format of a-b-c-d, e.g., 4-4-4-4, '
                                                  'where a, b, c, and d refer to the numbers of heads in stage-1, '
                                                  'stage-2, stage-3, and stage-4 PWAMs')
    parser.add_argument('--model', default='lavt', help='model: lavt, lavt_one')
    parser.add_argument('--model_id', default='lavt', help='name to identify the model')
    parser.add_argument('--output-dir', default='./checkpoints/', help='path where to save checkpoint weights')
    parser.add_argument('--pin_mem', action='store_true',
                        help='If true, pin memory when using the data loader.')
    parser.add_argument('--pretrained_swin_weights', default='',
                        help='path to pre-trained Swin backbone weights')
    parser.add_argument('--print-freq', default=10, type=int, help='print frequency')
    parser.add_argument('--refer_data_root', default='./refer/data/', help='REFER dataset root directory')
    parser.add_argument('--resume', default='', help='resume from checkpoint')
    parser.add_argument('--split', default='test', help='only used when testing')
    parser.add_argument('--splitBy', default='unc', help='change to umd or google when the dataset is G-Ref (RefCOCOg)')
    parser.add_argument('--swin_type', default='base',
                        help='tiny, small, base, or large variants of the Swin Transformer')
    parser.add_argument('--wd', '--weight-decay', default=1e-2, type=float, metavar='W', help='weight decay',
                        dest='weight_decay')
    parser.add_argument('--window12', action='store_true',
                        help='only needs to be specified when testing; '
                             'when training, the window size is inferred from the pre-trained weights file name '
                             '(containing \'window12\'). Initialize Swin with window size 12 instead of the default 7.')
    parser.add_argument('-j', '--workers', default=8, type=int, metavar='N', help='number of data loading workers')
    return parser


if __name__ == "__main__":
    parser = get_parser()
    args_dict = parser.parse_args()
================================================
FILE: bert/activations.py
================================================
import logging
import math

import torch
import torch.nn.functional as F

logger = logging.getLogger(__name__)


def swish(x):
    return x * torch.sigmoid(x)


def _gelu_python(x):
    """ Original Implementation of the gelu activation function in Google Bert repo when initially created.
        For information: OpenAI GPT's gelu is slightly different (and gives slightly different results):
        0.5 * x * (1 + torch.tanh(math.sqrt(2 / math.pi) * (x + 0.044715 * torch.pow(x, 3))))
        This is now written in C in torch.nn.functional
        Also see https://arxiv.org/abs/1606.08415
    """
    return x * 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0)))


def gelu_new(x):
    """ Implementation of the gelu activation function currently in Google Bert repo (identical to OpenAI GPT).
        Also see https://arxiv.org/abs/1606.08415
    """
    return 0.5 * x * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * torch.pow(x, 3.0))))


if torch.__version__ < "1.4.0":
    gelu = _gelu_python
else:
    gelu = F.gelu


def gelu_fast(x):
    return 0.5 * x * (1.0 + torch.tanh(x * 0.7978845608 * (1.0 + 0.044715 * x * x)))


ACT2FN = {
    "relu": F.relu,
    "swish": swish,
    "gelu": gelu,
    "tanh": torch.tanh,
    "gelu_new": gelu_new,
    "gelu_fast": gelu_fast,
}


def get_activation(activation_string):
    if activation_string in ACT2FN:
        return ACT2FN[activation_string]
    else:
        raise KeyError("function {} not found in ACT2FN mapping {}".format(activation_string, list(ACT2FN.keys())))
================================================
FILE: bert/configuration_bert.py
================================================
# coding=utf-8
# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team.
# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" BERT model configuration """
import logging
from .configuration_utils import PretrainedConfig
logger = logging.getLogger(__name__)
BERT_PRETRAINED_CONFIG_ARCHIVE_MAP = {
"bert-base-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json",
"bert-large-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-config.json",
"bert-base-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-config.json",
"bert-large-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-config.json",
"bert-base-multilingual-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-uncased-config.json",
"bert-base-multilingual-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-cased-config.json",
"bert-base-chinese": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese-config.json",
"bert-base-german-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-cased-config.json",
"bert-large-uncased-whole-word-masking": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-config.json",
"bert-large-cased-whole-word-masking": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-whole-word-masking-config.json",
"bert-large-uncased-whole-word-masking-finetuned-squad": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-finetuned-squad-config.json",
"bert-large-cased-whole-word-masking-finetuned-squad": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-whole-word-masking-finetuned-squad-config.json",
"bert-base-cased-finetuned-mrpc": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-finetuned-mrpc-config.json",
"bert-base-german-dbmdz-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-config.json",
"bert-base-german-dbmdz-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-config.json",
"cl-tohoku/bert-base-japanese": "https://s3.amazonaws.com/models.huggingface.co/bert/cl-tohoku/bert-base-japanese/config.json",
"cl-tohoku/bert-base-japanese-whole-word-masking": "https://s3.amazonaws.com/models.huggingface.co/bert/cl-tohoku/bert-base-japanese-whole-word-masking/config.json",
"cl-tohoku/bert-base-japanese-char": "https://s3.amazonaws.com/models.huggingface.co/bert/cl-tohoku/bert-base-japanese-char/config.json",
"cl-tohoku/bert-base-japanese-char-whole-word-masking": "https://s3.amazonaws.com/models.huggingface.co/bert/cl-tohoku/bert-base-japanese-char-whole-word-masking/config.json",
"TurkuNLP/bert-base-finnish-cased-v1": "https://s3.amazonaws.com/models.huggingface.co/bert/TurkuNLP/bert-base-finnish-cased-v1/config.json",
"TurkuNLP/bert-base-finnish-uncased-v1": "https://s3.amazonaws.com/models.huggingface.co/bert/TurkuNLP/bert-base-finnish-uncased-v1/config.json",
"wietsedv/bert-base-dutch-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/wietsedv/bert-base-dutch-cased/config.json",
# See all BERT models at https://huggingface.co/models?filter=bert
}
class BertConfig(PretrainedConfig):
r"""
This is the configuration class to store the configuration of a :class:`~transformers.BertModel`.
It is used to instantiate a BERT model according to the specified arguments, defining the model
architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of
the BERT `bert-base-uncased <https://huggingface.co/bert-base-uncased>`__ architecture.
Configuration objects inherit from :class:`~transformers.PretrainedConfig` and can be used
to control the model outputs. Read the documentation from :class:`~transformers.PretrainedConfig`
for more information.
Args:
vocab_size (:obj:`int`, optional, defaults to 30522):
Vocabulary size of the BERT model. Defines the different tokens that
can be represented by the `inputs_ids` passed to the forward method of :class:`~transformers.BertModel`.
hidden_size (:obj:`int`, optional, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (:obj:`int`, optional, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (:obj:`int`, optional, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (:obj:`int`, optional, defaults to 3072):
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (:obj:`str` or :obj:`function`, optional, defaults to "gelu"):
The non-linear activation function (function or string) in the encoder and pooler.
If string, "gelu", "relu", "swish" and "gelu_new" are supported.
hidden_dropout_prob (:obj:`float`, optional, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (:obj:`float`, optional, defaults to 0.1):
The dropout ratio for the attention probabilities.
max_position_embeddings (:obj:`int`, optional, defaults to 512):
The maximum sequence length that this model might ever be used with.
Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (:obj:`int`, optional, defaults to 2):
The vocabulary size of the `token_type_ids` passed into :class:`~transformers.BertModel`.
initializer_range (:obj:`float`, optional, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (:obj:`float`, optional, defaults to 1e-12):
The epsilon used by the layer normalization layers.
gradient_checkpointing (:obj:`bool`, optional, defaults to False):
If True, use gradient checkpointing to save memory at the expense of slower backward pass.
Example::
>>> from transformers import BertModel, BertConfig
>>> # Initializing a BERT bert-base-uncased style configuration
>>> configuration = BertConfig()
>>> # Initializing a model from the bert-base-uncased style configuration
>>> model = BertModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
"""
model_type = "bert"
def __init__(
self,
vocab_size=30522,
hidden_size=768,
num_hidden_layers=12,
num_attention_heads=12,
intermediate_size=3072,
hidden_act="gelu",
hidden_dropout_prob=0.1,
attention_probs_dropout_prob=0.1,
max_position_embeddings=512,
type_vocab_size=2,
initializer_range=0.02,
layer_norm_eps=1e-12,
pad_token_id=0,
gradient_checkpointing=False,
**kwargs
):
super().__init__(pad_token_id=pad_token_id, **kwargs)
self.vocab_size = vocab_size
self.hidden_size = hidden_size
self.num_hidden_layers = num_hidden_layers
self.num_attention_heads = num_attention_heads
self.hidden_act = hidden_act
self.intermediate_size = intermediate_size
self.hidden_dropout_prob = hidden_dropout_prob
self.attention_probs_dropout_prob = attention_probs_dropout_prob
self.max_position_embeddings = max_position_embeddings
self.type_vocab_size = type_vocab_size
self.initializer_range = initializer_range
self.layer_norm_eps = layer_norm_eps
self.gradient_checkpointing = gradient_checkpointing
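# ----------------------------------------------------------------------
# Editor's note: a hedged usage sketch appended here; it is not part of the
# original file. It relies only on BertConfig above and on serialization
# helpers inherited from PretrainedConfig (to_dict / to_diff_dict / from_dict).
if __name__ == "__main__":
    config = BertConfig(num_hidden_layers=6, hidden_dropout_prob=0.2)
    # to_diff_dict() drops attributes whose values equal the base
    # PretrainedConfig defaults, keeping the serialized output compact.
    print(config.to_diff_dict())
    # Round-trip through a plain dictionary.
    clone = BertConfig.from_dict(config.to_dict())
    assert clone.num_hidden_layers == 6 and clone.hidden_dropout_prob == 0.2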
================================================
FILE: bert/configuration_utils.py
================================================
# coding=utf-8
# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team.
# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" Configuration base class and utilities."""
import copy
import json
import logging
import os
from typing import Dict, Tuple
from .file_utils import CONFIG_NAME, cached_path, hf_bucket_url, is_remote_url
logger = logging.getLogger(__name__)
class PretrainedConfig(object):
r""" Base class for all configuration classes.
Handles a few parameters common to all models' configurations as well as methods for loading/downloading/saving configurations.
Note:
A configuration file can be loaded and saved to disk. Loading the configuration file and using this file to initialize a model does **not** load the model weights.
It only affects the model's configuration.
Class attributes (overridden by derived classes):
- ``model_type``: a string that identifies the model type, that we serialize into the JSON file, and that we use to recreate the correct object in :class:`~transformers.AutoConfig`.
Args:
finetuning_task (:obj:`string` or :obj:`None`, `optional`, defaults to :obj:`None`):
Name of the task used to fine-tune the model. This can be used when converting from an original (TensorFlow or PyTorch) checkpoint.
num_labels (:obj:`int`, `optional`, defaults to `2`):
Number of classes to use when the model is a classification model (sequences/tokens)
output_hidden_states (:obj:`bool`, `optional`, defaults to :obj:`False`):
Whether the model should return all hidden-states.
output_attentions (:obj:`bool`, `optional`, defaults to :obj:`False`):
Whether the model should return all attentions.
torchscript (:obj:`bool`, `optional`, defaults to :obj:`False`):
Is the model used with Torchscript (for PyTorch models).
"""
model_type: str = ""
def __init__(self, **kwargs):
# Attributes with defaults
self.output_hidden_states = kwargs.pop("output_hidden_states", False)
self.output_attentions = kwargs.pop("output_attentions", False)
self.use_cache = kwargs.pop("use_cache", True) # Not used by all models
self.torchscript = kwargs.pop("torchscript", False) # Only used by PyTorch models
self.use_bfloat16 = kwargs.pop("use_bfloat16", False)
self.pruned_heads = kwargs.pop("pruned_heads", {})
# `is_decoder` is used in encoder-decoder models to differentiate the encoder from the decoder
self.is_encoder_decoder = kwargs.pop("is_encoder_decoder", False)
self.is_decoder = kwargs.pop("is_decoder", False)
# Parameters for sequence generation
self.max_length = kwargs.pop("max_length", 20)
self.min_length = kwargs.pop("min_length", 0)
self.do_sample = kwargs.pop("do_sample", False)
self.early_stopping = kwargs.pop("early_stopping", False)
self.num_beams = kwargs.pop("num_beams", 1)
self.temperature = kwargs.pop("temperature", 1.0)
self.top_k = kwargs.pop("top_k", 50)
self.top_p = kwargs.pop("top_p", 1.0)
self.repetition_penalty = kwargs.pop("repetition_penalty", 1.0)
self.length_penalty = kwargs.pop("length_penalty", 1.0)
self.no_repeat_ngram_size = kwargs.pop("no_repeat_ngram_size", 0)
self.bad_words_ids = kwargs.pop("bad_words_ids", None)
self.num_return_sequences = kwargs.pop("num_return_sequences", 1)
# Fine-tuning task arguments
self.architectures = kwargs.pop("architectures", None)
self.finetuning_task = kwargs.pop("finetuning_task", None)
self.id2label = kwargs.pop("id2label", None)
self.label2id = kwargs.pop("label2id", None)
if self.id2label is not None:
kwargs.pop("num_labels", None)
self.id2label = dict((int(key), value) for key, value in self.id2label.items())
# Keys are always strings in JSON so convert ids to int here.
else:
self.num_labels = kwargs.pop("num_labels", 2)
# Tokenizer arguments TODO: eventually tokenizer and models should share the same config
self.prefix = kwargs.pop("prefix", None)
self.bos_token_id = kwargs.pop("bos_token_id", None)
self.pad_token_id = kwargs.pop("pad_token_id", None)
self.eos_token_id = kwargs.pop("eos_token_id", None)
self.decoder_start_token_id = kwargs.pop("decoder_start_token_id", None)
# task specific arguments
self.task_specific_params = kwargs.pop("task_specific_params", None)
# TPU arguments
self.xla_device = kwargs.pop("xla_device", None)
# Additional attributes without default values
for key, value in kwargs.items():
try:
setattr(self, key, value)
except AttributeError as err:
logger.error("Can't set {} with value {} for {}".format(key, value, self))
raise err
@property
def num_labels(self):
return len(self.id2label)
@num_labels.setter
def num_labels(self, num_labels):
self.id2label = {i: "LABEL_{}".format(i) for i in range(num_labels)}
self.label2id = dict(zip(self.id2label.values(), self.id2label.keys()))
def save_pretrained(self, save_directory):
"""
Save a configuration object to the directory `save_directory`, so that it
can be re-loaded using the :func:`~transformers.PretrainedConfig.from_pretrained` class method.
Args:
save_directory (:obj:`string`):
Directory where the configuration JSON file will be saved.
"""
if os.path.isfile(save_directory):
raise AssertionError("Provided path ({}) should be a directory, not a file".format(save_directory))
os.makedirs(save_directory, exist_ok=True)
# If we save using the predefined names, we can load using `from_pretrained`
output_config_file = os.path.join(save_directory, CONFIG_NAME)
self.to_json_file(output_config_file, use_diff=True)
logger.info("Configuration saved in {}".format(output_config_file))
@classmethod
def from_pretrained(cls, pretrained_model_name_or_path, **kwargs) -> "PretrainedConfig":
r"""
Instantiate a :class:`~transformers.PretrainedConfig` (or a derived class) from a pre-trained model configuration.
Args:
pretrained_model_name_or_path (:obj:`string`):
either:
- a string with the `shortcut name` of a pre-trained model configuration to load from cache or
download, e.g.: ``bert-base-uncased``.
- a string with the `identifier name` of a pre-trained model configuration that was user-uploaded to
our S3, e.g.: ``dbmdz/bert-base-german-cased``.
- a path to a `directory` containing a configuration file saved using the
:func:`~transformers.PretrainedConfig.save_pretrained` method, e.g.: ``./my_model_directory/``.
- a path or url to a saved configuration JSON `file`, e.g.:
``./my_model_directory/configuration.json``.
cache_dir (:obj:`string`, `optional`):
Path to a directory in which a downloaded pre-trained model
configuration should be cached if the standard cache should not be used.
kwargs (:obj:`Dict[str, any]`, `optional`):
The values in kwargs of any keys which are configuration attributes will be used to override the loaded
values. Behavior concerning key/value pairs whose keys are *not* configuration attributes is
controlled by the `return_unused_kwargs` keyword parameter.
force_download (:obj:`bool`, `optional`, defaults to :obj:`False`):
Force to (re-)download the model weights and configuration files and override the cached versions if they exist.
resume_download (:obj:`bool`, `optional`, defaults to :obj:`False`):
Do not delete an incompletely received file. Attempt to resume the download if such a file exists.
proxies (:obj:`Dict`, `optional`):
A dictionary of proxy servers to use by protocol or endpoint, e.g.:
:obj:`{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}.`
The proxies are used on each request.
return_unused_kwargs: (`optional`) bool:
If False, then this function returns just the final configuration object.
If True, then this functions returns a :obj:`Tuple(config, unused_kwargs)` where `unused_kwargs` is a
dictionary consisting of the key/value pairs whose keys are not configuration attributes: ie the part
of kwargs which has not been used to update `config` and is otherwise ignored.
Returns:
:class:`PretrainedConfig`: An instance of a configuration object
Examples::
# We can't instantiate directly the base class `PretrainedConfig` so let's show the examples on a
# derived class: BertConfig
config = BertConfig.from_pretrained('bert-base-uncased') # Download configuration from S3 and cache.
config = BertConfig.from_pretrained('./test/saved_model/') # E.g. config (or model) was saved using `save_pretrained('./test/saved_model/')`
config = BertConfig.from_pretrained('./test/saved_model/my_configuration.json')
config = BertConfig.from_pretrained('bert-base-uncased', output_attention=True, foo=False)
assert config.output_attention == True
config, unused_kwargs = BertConfig.from_pretrained('bert-base-uncased', output_attention=True,
foo=False, return_unused_kwargs=True)
assert config.output_attention == True
assert unused_kwargs == {'foo': False}
"""
config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
return cls.from_dict(config_dict, **kwargs)
@classmethod
def get_config_dict(cls, pretrained_model_name_or_path: str, **kwargs) -> Tuple[Dict, Dict]:
"""
From a `pretrained_model_name_or_path`, resolve to a dictionary of parameters, to be used
for instantiating a Config using `from_dict`.
Parameters:
pretrained_model_name_or_path (:obj:`string`):
The identifier of the pre-trained checkpoint from which we want the dictionary of parameters.
Returns:
:obj:`Tuple[Dict, Dict]`: The dictionary that will be used to instantiate the configuration object.
"""
cache_dir = kwargs.pop("cache_dir", None)
force_download = kwargs.pop("force_download", False)
resume_download = kwargs.pop("resume_download", False)
proxies = kwargs.pop("proxies", None)
local_files_only = kwargs.pop("local_files_only", False)
if os.path.isdir(pretrained_model_name_or_path):
config_file = os.path.join(pretrained_model_name_or_path, CONFIG_NAME)
elif os.path.isfile(pretrained_model_name_or_path) or is_remote_url(pretrained_model_name_or_path):
config_file = pretrained_model_name_or_path
else:
config_file = hf_bucket_url(pretrained_model_name_or_path, filename=CONFIG_NAME, use_cdn=False)
try:
# Load from URL or cache if already cached
resolved_config_file = cached_path(
config_file,
cache_dir=cache_dir,
force_download=force_download,
proxies=proxies,
resume_download=resume_download,
local_files_only=local_files_only,
)
# Load config dict
if resolved_config_file is None:
raise EnvironmentError
config_dict = cls._dict_from_json_file(resolved_config_file)
except EnvironmentError:
msg = (
f"Can't load config for '{pretrained_model_name_or_path}'. Make sure that:\n\n"
f"- '{pretrained_model_name_or_path}' is a correct model identifier listed on 'https://huggingface.co/models'\n\n"
f"- or '{pretrained_model_name_or_path}' is the correct path to a directory containing a {CONFIG_NAME} file\n\n"
)
raise EnvironmentError(msg)
except json.JSONDecodeError:
msg = (
"Couldn't reach server at '{}' to download configuration file or "
"configuration file is not a valid JSON file. "
"Please check network or file content here: {}.".format(config_file, resolved_config_file)
)
raise EnvironmentError(msg)
if resolved_config_file == config_file:
logger.info("loading configuration file {}".format(config_file))
else:
logger.info("loading configuration file {} from cache at {}".format(config_file, resolved_config_file))
return config_dict, kwargs
@classmethod
def from_dict(cls, config_dict: Dict, **kwargs) -> "PretrainedConfig":
"""
Constructs a `Config` from a Python dictionary of parameters.
Args:
config_dict (:obj:`Dict[str, any]`):
Dictionary that will be used to instantiate the configuration object. Such a dictionary can be retrieved
from a pre-trained checkpoint by leveraging the :func:`~transformers.PretrainedConfig.get_config_dict`
method.
kwargs (:obj:`Dict[str, any]`):
Additional parameters from which to initialize the configuration object.
Returns:
:class:`PretrainedConfig`: An instance of a configuration object
"""
return_unused_kwargs = kwargs.pop("return_unused_kwargs", False)
config = cls(**config_dict)
if hasattr(config, "pruned_heads"):
config.pruned_heads = dict((int(key), value) for key, value in config.pruned_heads.items())
# Update config with kwargs if needed
to_remove = []
for key, value in kwargs.items():
if hasattr(config, key):
setattr(config, key, value)
to_remove.append(key)
for key in to_remove:
kwargs.pop(key, None)
logger.info("Model config %s", str(config))
if return_unused_kwargs:
return config, kwargs
else:
return config
@classmethod
def from_json_file(cls, json_file: str) -> "PretrainedConfig":
"""
Constructs a `Config` from the path to a json file of parameters.
Args:
json_file (:obj:`string`):
Path to the JSON file containing the parameters.
Returns:
:class:`PretrainedConfig`: An instance of a configuration object
"""
config_dict = cls._dict_from_json_file(json_file)
return cls(**config_dict)
@classmethod
def _dict_from_json_file(cls, json_file: str):
with open(json_file, "r", encoding="utf-8") as reader:
text = reader.read()
return json.loads(text)
def __eq__(self, other):
return self.__dict__ == other.__dict__
def __repr__(self):
return "{} {}".format(self.__class__.__name__, self.to_json_string())
def to_diff_dict(self):
"""
Removes all attributes from config which correspond to the default
config attributes for better readability and serializes to a Python
dictionary.
Returns:
:obj:`Dict[str, any]`: Dictionary of all the attributes that make up this configuration instance,
"""
config_dict = self.to_dict()
# get the default config dict
default_config_dict = PretrainedConfig().to_dict()
serializable_config_dict = {}
# only serialize values that differ from the default config
for key, value in config_dict.items():
if key not in default_config_dict or value != default_config_dict[key]:
serializable_config_dict[key] = value
return serializable_config_dict
def to_dict(self):
"""
Serializes this instance to a Python dictionary.
Returns:
:obj:`Dict[str, any]`: Dictionary of all the attributes that make up this configuration instance,
"""
output = copy.deepcopy(self.__dict__)
if hasattr(self.__class__, "model_type"):
output["model_type"] = self.__class__.model_type
return output
def to_json_string(self, use_diff=True):
"""
Serializes this instance to a JSON string.
Args:
use_diff (:obj:`bool`):
If set to True, only the difference between the config instance and the default PretrainedConfig() is serialized to JSON string.
Returns:
:obj:`string`: String containing all the attributes that make up this configuration instance in JSON format.
"""
if use_diff is True:
config_dict = self.to_diff_dict()
else:
config_dict = self.to_dict()
return json.dumps(config_dict, indent=2, sort_keys=True) + "\n"
def to_json_file(self, json_file_path, use_diff=True):
"""
Save this instance to a json file.
Args:
json_file_path (:obj:`string`):
Path to the JSON file in which this configuration instance's parameters will be saved.
use_diff (:obj:`bool`):
If set to True, only the difference between the config instance and the default PretrainedConfig() is serialized to JSON file.
"""
with open(json_file_path, "w", encoding="utf-8") as writer:
writer.write(self.to_json_string(use_diff=use_diff))
def update(self, config_dict: Dict):
"""
Updates attributes of this class
with attributes from `config_dict`.
Args:
:obj:`Dict[str, any]`: Dictionary of attributes that shall be updated for this class.
"""
for key, value in config_dict.items():
setattr(self, key, value)
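# ----------------------------------------------------------------------
# Editor's note: a hedged round-trip sketch appended here; it is not part of
# the original file. It exercises only methods defined above: save_pretrained()
# writes config.json into a directory and from_pretrained() resolves that
# local directory through get_config_dict(), so no network access is needed.
if __name__ == "__main__":
    import tempfile
    config = PretrainedConfig(output_hidden_states=True, num_labels=3)
    with tempfile.TemporaryDirectory() as tmp_dir:
        config.save_pretrained(tmp_dir)                       # writes <tmp_dir>/config.json
        restored = PretrainedConfig.from_pretrained(tmp_dir)  # loads it back via from_dict()
    assert restored.output_hidden_states is True
    assert restored.num_labels == 3  # rebuilt from the saved id2label mapping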
================================================
FILE: bert/file_utils.py
================================================
"""
Utilities for working with the local dataset cache.
This file is adapted from the AllenNLP library at https://github.com/allenai/allennlp
Copyright by the AllenNLP authors.
"""
import fnmatch
import json
import logging
import os
import shutil
import sys
import tarfile
import tempfile
from contextlib import contextmanager
from functools import partial, wraps
from hashlib import sha256
from pathlib import Path
from typing import Dict, Optional, Union
from urllib.parse import urlparse
from zipfile import ZipFile, is_zipfile
import requests
from filelock import FileLock
from tqdm.auto import tqdm
#from . import __version__
__version__ = "3.0.2"
logger = logging.getLogger(__name__) # pylint: disable=invalid-name
try:
USE_TF = os.environ.get("USE_TF", "AUTO").upper()
USE_TORCH = os.environ.get("USE_TORCH", "AUTO").upper()
if USE_TORCH in ("1", "ON", "YES", "AUTO") and USE_TF not in ("1", "ON", "YES"):
import torch
_torch_available = True # pylint: disable=invalid-name
logger.info("PyTorch version {} available.".format(torch.__version__))
else:
logger.info("Disabling PyTorch because USE_TF is set")
_torch_available = False
except ImportError:
_torch_available = False # pylint: disable=invalid-name
try:
USE_TF = os.environ.get("USE_TF", "AUTO").upper()
USE_TORCH = os.environ.get("USE_TORCH", "AUTO").upper()
if USE_TF in ("1", "ON", "YES", "AUTO") and USE_TORCH not in ("1", "ON", "YES"):
import tensorflow as tf
assert hasattr(tf, "__version__") and int(tf.__version__[0]) >= 2
_tf_available = True # pylint: disable=invalid-name
logger.info("TensorFlow version {} available.".format(tf.__version__))
else:
logger.info("Disabling Tensorflow because USE_TORCH is set")
_tf_available = False
except (ImportError, AssertionError):
_tf_available = False # pylint: disable=invalid-name
try:
from torch.hub import _get_torch_home
torch_cache_home = _get_torch_home()
except ImportError:
torch_cache_home = os.path.expanduser(
os.getenv("TORCH_HOME", os.path.join(os.getenv("XDG_CACHE_HOME", "~/.cache"), "torch"))
)
try:
import torch_xla.core.xla_model as xm # noqa: F401
if _torch_available:
_torch_tpu_available = True # pylint: disable=
else:
_torch_tpu_available = False
except ImportError:
_torch_tpu_available = False
try:
import psutil # noqa: F401
_psutil_available = True
except ImportError:
_psutil_available = False
try:
import py3nvml # noqa: F401
_py3nvml_available = True
except ImportError:
_py3nvml_available = False
try:
from apex import amp # noqa: F401
_has_apex = True
except ImportError:
_has_apex = False
default_cache_path = os.path.join(torch_cache_home, "transformers")
PYTORCH_PRETRAINED_BERT_CACHE = os.getenv("PYTORCH_PRETRAINED_BERT_CACHE", default_cache_path)
PYTORCH_TRANSFORMERS_CACHE = os.getenv("PYTORCH_TRANSFORMERS_CACHE", PYTORCH_PRETRAINED_BERT_CACHE)
TRANSFORMERS_CACHE = os.getenv("TRANSFORMERS_CACHE", PYTORCH_TRANSFORMERS_CACHE)
WEIGHTS_NAME = "pytorch_model.bin"
TF2_WEIGHTS_NAME = "tf_model.h5"
TF_WEIGHTS_NAME = "model.ckpt"
CONFIG_NAME = "config.json"
MODEL_CARD_NAME = "modelcard.json"
MULTIPLE_CHOICE_DUMMY_INPUTS = [[[0], [1]], [[0], [1]]]
DUMMY_INPUTS = [[7, 6, 0, 0, 1], [1, 2, 3, 0, 0], [0, 0, 0, 4, 5]]
DUMMY_MASK = [[1, 1, 1, 1, 1], [1, 1, 1, 0, 0], [0, 0, 0, 1, 1]]
S3_BUCKET_PREFIX = "https://s3.amazonaws.com/models.huggingface.co/bert"
CLOUDFRONT_DISTRIB_PREFIX = "https://cdn.huggingface.co"
def is_torch_available():
return _torch_available
def is_tf_available():
return _tf_available
def is_torch_tpu_available():
return _torch_tpu_available
def is_psutil_available():
return _psutil_available
def is_py3nvml_available():
return _py3nvml_available
def is_apex_available():
return _has_apex
def add_start_docstrings(*docstr):
def docstring_decorator(fn):
fn.__doc__ = "".join(docstr) + (fn.__doc__ if fn.__doc__ is not None else "")
return fn
return docstring_decorator
def add_start_docstrings_to_callable(*docstr):
def docstring_decorator(fn):
class_name = ":class:`~transformers.{}`".format(fn.__qualname__.split(".")[0])
intro = " The {} forward method, overrides the :func:`__call__` special method.".format(class_name)
note = r"""
.. note::
Although the recipe for forward pass needs to be defined within
this function, one should call the :class:`Module` instance afterwards
instead of this since the former takes care of running the
pre and post processing steps while the latter silently ignores them.
"""
fn.__doc__ = intro + note + "".join(docstr) + (fn.__doc__ if fn.__doc__ is not None else "")
return fn
return docstring_decorator
def add_end_docstrings(*docstr):
def docstring_decorator(fn):
fn.__doc__ = fn.__doc__ + "".join(docstr)
return fn
return docstring_decorator
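def _demo_docstring_decorators():
    # Editor's sketch, not part of the original file: the helpers above simply
    # concatenate extra text onto a function's __doc__; decorators apply
    # bottom-up, so the end text is attached before the start text.
    @add_start_docstrings("Prefix. ")
    @add_end_docstrings(" Suffix.")
    def fn():
        """Body."""
    assert fn.__doc__ == "Prefix. Body. Suffix."
    return fn.__doc__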
PT_TOKEN_CLASSIFICATION_SAMPLE = r"""
Example::
>>> from transformers import {tokenizer_class}, {model_class}
>>> import torch
>>> tokenizer = {tokenizer_class}.from_pretrained('{checkpoint}')
>>> model = {model_class}.from_pretrained('{checkpoint}')
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> labels = torch.tensor([1] * inputs["input_ids"].size(1)).unsqueeze(0) # Batch size 1
>>> outputs = model(**inputs, labels=labels)
>>> loss, scores = outputs[:2]
"""
PT_QUESTION_ANSWERING_SAMPLE = r"""
Example::
>>> from transformers import {tokenizer_class}, {model_class}
>>> import torch
>>> tokenizer = {tokenizer_class}.from_pretrained('{checkpoint}')
>>> model = {model_class}.from_pretrained('{checkpoint}')
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> start_positions = torch.tensor([1])
>>> end_positions = torch.tensor([3])
>>> outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
>>> loss, start_scores, end_scores = outputs[:3]
"""
PT_SEQUENCE_CLASSIFICATION_SAMPLE = r"""
Example::
>>> from transformers import {tokenizer_class}, {model_class}
>>> import torch
>>> tokenizer = {tokenizer_class}.from_pretrained('{checkpoint}')
>>> model = {model_class}.from_pretrained('{checkpoint}')
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> labels = torch.tensor([1]).unsqueeze(0) # Batch size 1
>>> outputs = model(**inputs, labels=labels)
>>> loss, logits = outputs[:2]
"""
PT_MASKED_LM_SAMPLE = r"""
Example::
>>> from transformers import {tokenizer_class}, {model_class}
>>> import torch
>>> tokenizer = {tokenizer_class}.from_pretrained('{checkpoint}')
>>> model = {model_class}.from_pretrained('{checkpoint}')
>>> input_ids = tokenizer("Hello, my dog is cute", return_tensors="pt")["input_ids"]
>>> outputs = model(input_ids, labels=input_ids)
>>> loss, prediction_scores = outputs[:2]
"""
PT_BASE_MODEL_SAMPLE = r"""
Example::
>>> from transformers import {tokenizer_class}, {model_class}
>>> import torch
>>> tokenizer = {tokenizer_class}.from_pretrained('{checkpoint}')
>>> model = {model_class}.from_pretrained('{checkpoint}')
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)
>>> last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple
"""
PT_MULTIPLE_CHOICE_SAMPLE = r"""
Example::
>>> from transformers import {tokenizer_class}, {model_class}
>>> import torch
>>> tokenizer = {tokenizer_class}.from_pretrained('{checkpoint}')
>>> model = {model_class}.from_pretrained('{checkpoint}')
>>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
>>> choice0 = "It is eaten with a fork and a knife."
>>> choice1 = "It is eaten while held in the hand."
>>> labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
>>> encoding = tokenizer([[prompt, prompt], [choice0, choice1]], return_tensors='pt', padding=True)
>>> outputs = model(**{{k: v.unsqueeze(0) for k,v in encoding.items()}}, labels=labels) # batch size is 1
>>> # the linear classifier still needs to be trained
>>> loss, logits = outputs[:2]
"""
PT_CAUSAL_LM_SAMPLE = r"""
Example::
>>> import torch
>>> from transformers import {tokenizer_class}, {model_class}
>>> tokenizer = {tokenizer_class}.from_pretrained('{checkpoint}')
>>> model = {model_class}.from_pretrained('{checkpoint}')
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs, labels=inputs["input_ids"])
>>> loss, logits = outputs[:2]
"""
TF_TOKEN_CLASSIFICATION_SAMPLE = r"""
Example::
>>> from transformers import {tokenizer_class}, {model_class}
>>> import tensorflow as tf
>>> tokenizer = {tokenizer_class}.from_pretrained('{checkpoint}')
>>> model = {model_class}.from_pretrained('{checkpoint}')
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> input_ids = inputs["input_ids"]
>>> inputs["labels"] = tf.reshape(tf.constant([1] * tf.size(input_ids).numpy()), (-1, tf.size(input_ids))) # Batch size 1
>>> outputs = model(inputs)
>>> loss, scores = outputs[:2]
"""
TF_QUESTION_ANSWERING_SAMPLE = r"""
Example::
>>> from transformers import {tokenizer_class}, {model_class}
>>> import tensorflow as tf
>>> tokenizer = {tokenizer_class}.from_pretrained('{checkpoint}')
>>> model = {model_class}.from_pretrained('{checkpoint}')
>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
>>> input_dict = tokenizer(question, text, return_tensors='tf')
>>> start_scores, end_scores = model(input_dict)
>>> all_tokens = tokenizer.convert_ids_to_tokens(input_dict["input_ids"].numpy()[0])
>>> answer = ' '.join(all_tokens[tf.math.argmax(start_scores, 1)[0] : tf.math.argmax(end_scores, 1)[0]+1])
"""
TF_SEQUENCE_CLASSIFICATION_SAMPLE = r"""
Example::
>>> from transformers import {tokenizer_class}, {model_class}
>>> import tensorflow as tf
>>> tokenizer = {tokenizer_class}.from_pretrained('{checkpoint}')
>>> model = {model_class}.from_pretrained('{checkpoint}')
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> inputs["labels"] = tf.reshape(tf.constant(1), (-1, 1)) # Batch size 1
>>> outputs = model(inputs)
>>> loss, logits = outputs[:2]
"""
TF_MASKED_LM_SAMPLE = r"""
Example::
>>> from transformers import {tokenizer_class}, {model_class}
>>> import tensorflow as tf
>>> tokenizer = {tokenizer_class}.from_pretrained('{checkpoint}')
>>> model = {model_class}.from_pretrained('{checkpoint}')
>>> input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True))[None, :] # Batch size 1
>>> outputs = model(input_ids)
>>> prediction_scores = outputs[0]
"""
TF_BASE_MODEL_SAMPLE = r"""
Example::
>>> from transformers import {tokenizer_class}, {model_class}
>>> import tensorflow as tf
>>> tokenizer = {tokenizer_class}.from_pretrained('{checkpoint}')
>>> model = {model_class}.from_pretrained('{checkpoint}')
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> outputs = model(inputs)
>>> last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple
"""
TF_MULTIPLE_CHOICE_SAMPLE = r"""
Example::
>>> from transformers import {tokenizer_class}, {model_class}
>>> import tensorflow as tf
>>> tokenizer = {tokenizer_class}.from_pretrained('{checkpoint}')
>>> model = {model_class}.from_pretrained('{checkpoint}')
>>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
>>> choice0 = "It is eaten with a fork and a knife."
>>> choice1 = "It is eaten while held in the hand."
>>> encoding = tokenizer([[prompt, prompt], [choice0, choice1]], return_tensors='tf', padding=True)
>>> inputs = {{k: tf.expand_dims(v, 0) for k, v in encoding.items()}}
>>> outputs = model(inputs) # batch size is 1
>>> # the linear classifier still needs to be trained
>>> logits = outputs[0]
"""
TF_CAUSAL_LM_SAMPLE = r"""
Example::
>>> from transformers import {tokenizer_class}, {model_class}
>>> import tensorflow as tf
>>> tokenizer = {tokenizer_class}.from_pretrained('{checkpoint}')
>>> model = {model_class}.from_pretrained('{checkpoint}')
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> outputs = model(inputs)
>>> logits = outputs[0]
"""
def add_code_sample_docstrings(*docstr, tokenizer_class=None, checkpoint=None):
def docstring_decorator(fn):
model_class = fn.__qualname__.split(".")[0]
is_tf_class = model_class[:2] == "TF"
if "SequenceClassification" in model_class:
code_sample = TF_SEQUENCE_CLASSIFICATION_SAMPLE if is_tf_class else PT_SEQUENCE_CLASSIFICATION_SAMPLE
elif "QuestionAnswering" in model_class:
code_sample = TF_QUESTION_ANSWERING_SAMPLE if is_tf_class else PT_QUESTION_ANSWERING_SAMPLE
elif "TokenClassification" in model_class:
code_sample = TF_TOKEN_CLASSIFICATION_SAMPLE if is_tf_class else PT_TOKEN_CLASSIFICATION_SAMPLE
elif "MultipleChoice" in model_class:
code_sample = TF_MULTIPLE_CHOICE_SAMPLE if is_tf_class else PT_MULTIPLE_CHOICE_SAMPLE
elif "MaskedLM" in model_class:
code_sample = TF_MASKED_LM_SAMPLE if is_tf_class else PT_MASKED_LM_SAMPLE
elif "LMHead" in model_class:
code_sample = TF_CAUSAL_LM_SAMPLE if is_tf_class else PT_CAUSAL_LM_SAMPLE
elif "Model" in model_class:
code_sample = TF_BASE_MODEL_SAMPLE if is_tf_class else PT_BASE_MODEL_SAMPLE
else:
raise ValueError(f"Docstring can't be built for model {model_class}")
built_doc = code_sample.format(model_class=model_class, tokenizer_class=tokenizer_class, checkpoint=checkpoint)
fn.__doc__ = (fn.__doc__ or "") + "".join(docstr) + built_doc
return fn
return docstring_decorator
def is_remote_url(url_or_filename):
parsed = urlparse(url_or_filename)
return parsed.scheme in ("http", "https")
def hf_bucket_url(model_id: str, filename: str, use_cdn=True) -> str:
"""
Resolve a model identifier, and a file name, to a HF-hosted url
on either S3 or Cloudfront (a Content Delivery Network, or CDN).
Cloudfront is replicated over the globe so downloads are way faster
for the end user (and it also lowers our bandwidth costs). However, it
is more aggressively cached by default, so may not always reflect the
latest changes to the underlying file (default TTL is 24 hours).
In terms of client-side caching from this library, even though
Cloudfront relays the ETags from S3, using one or the other
(or switching from one to the other) will affect caching: cached files
are not shared between the two because the cached file's name contains
a hash of the url.
"""
endpoint = CLOUDFRONT_DISTRIB_PREFIX if use_cdn else S3_BUCKET_PREFIX
legacy_format = "/" not in model_id
if legacy_format:
return f"{endpoint}/{model_id}-{filename}"
else:
return f"{endpoint}/{model_id}/{filename}"
def url_to_filename(url, etag=None):
"""
Convert `url` into a hashed filename in a repeatable way.
If `etag` is specified, append its hash to the url's, delimited
by a period.
If the url ends with .h5 (Keras HDF5 weights), '.h5' is appended to the name
so that TF 2.0 can identify it as an HDF5 file
(see https://github.com/tensorflow/tensorflow/blob/00fad90125b18b80fe054de1055770cfb8fe4ba3/tensorflow/python/keras/engine/network.py#L1380)
"""
url_bytes = url.encode("utf-8")
url_hash = sha256(url_bytes)
filename = url_hash.hexdigest()
if etag:
etag_bytes = etag.encode("utf-8")
etag_hash = sha256(etag_bytes)
filename += "." + etag_hash.hexdigest()
if url.endswith(".h5"):
filename += ".h5"
return filename
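def _demo_url_to_filename():
    # Editor's sketch, not part of the original file: the cache filename is the
    # sha256 hexdigest of the URL, optionally extended with the hexdigest of the
    # ETag after a period (plus ".h5" for Keras weight files).
    name = url_to_filename("https://example.com/pytorch_model.bin")
    assert len(name) == 64  # plain sha256 hexdigest
    name_with_etag = url_to_filename("https://example.com/pytorch_model.bin", etag='"abc123"')
    assert name_with_etag.startswith(name + ".")
    return name_with_etag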
def filename_to_url(filename, cache_dir=None):
"""
Return the url and etag (which may be ``None``) stored for `filename`.
Raise ``EnvironmentError`` if `filename` or its stored metadata do not exist.
"""
if cache_dir is None:
cache_dir = TRANSFORMERS_CACHE
if isinstance(cache_dir, Path):
cache_dir = str(cache_dir)
cache_path = os.path.join(cache_dir, filename)
if not os.path.exists(cache_path):
raise EnvironmentError("file {} not found".format(cache_path))
meta_path = cache_path + ".json"
if not os.path.exists(meta_path):
raise EnvironmentError("file {} not found".format(meta_path))
with open(meta_path, encoding="utf-8") as meta_file:
metadata = json.load(meta_file)
url = metadata["url"]
etag = metadata["etag"]
return url, etag
def cached_path(
url_or_filename,
cache_dir=None,
force_download=False,
proxies=None,
resume_download=False,
user_agent: Union[Dict, str, None] = None,
extract_compressed_file=False,
force_extract=False,
local_files_only=False,
) -> Optional[str]:
"""
Given something that might be a URL (or might be a local path),
determine which. If it's a URL, download the file and cache it, and
return the path to the cached file. If it's already a local path,
make sure the file exists and then return the path.
Args:
cache_dir: specify a cache directory to save the file to (overwrite the default cache dir).
force_download: if True, re-download the file even if it's already cached in the cache dir.
resume_download: if True, resume the download if an incompletely received file is found.
user_agent: Optional string or dict that will be appended to the user-agent on remote requests.
extract_compressed_file: if True and the path points to a zip or tar file, extract the compressed
file in a folder alongside the archive.
force_extract: if True when extract_compressed_file is True and the archive was already extracted,
re-extract the archive and override the folder where it was extracted.
Return:
None in case of non-recoverable file (non-existent or inaccessible url + no cache on disk).
Local path (string) otherwise
"""
if cache_dir is None:
cache_dir = TRANSFORMERS_CACHE
if isinstance(url_or_filename, Path):
url_or_filename = str(url_or_filename)
if isinstance(cache_dir, Path):
cache_dir = str(cache_dir)
if is_remote_url(url_or_filename):
# URL, so get it from the cache (downloading if necessary)
output_path = get_from_cache(
url_or_filename,
cache_dir=cache_dir,
force_download=force_download,
proxies=proxies,
resume_download=resume_download,
user_agent=user_agent,
local_files_only=local_files_only,
)
elif os.path.exists(url_or_filename):
# File, and it exists.
output_path = url_or_filename
elif urlparse(url_or_filename).scheme == "":
# File, but it doesn't exist.
raise EnvironmentError("file {} not found".format(url_or_filename))
else:
# Something unknown
raise ValueError("unable to parse {} as a URL or as a local path".format(url_or_filename))
if extract_compressed_file:
if not is_zipfile(output_path) and not tarfile.is_tarfile(output_path):
return output_path
# Path where we extract compressed archives
# We avoid '.' in dir name and add "-extracted" at the end: "./model.zip" => "./model-zip-extracted/"
output_dir, output_file = os.path.split(output_path)
output_extract_dir_name = output_file.replace(".", "-") + "-extracted"
output_path_extracted = os.path.join(output_dir, output_extract_dir_name)
if os.path.isdir(output_path_extracted) and os.listdir(output_path_extracted) and not force_extract:
return output_path_extracted
# Prevent parallel extractions
lock_path = output_path + ".lock"
with FileLock(lock_path):
shutil.rmtree(output_path_extracted, ignore_errors=True)
os.makedirs(output_path_extracted)
if is_zipfile(output_path):
with ZipFile(output_path, "r") as zip_file:
zip_file.extractall(output_path_extracted)
zip_file.close()
elif tarfile.is_tarfile(output_path):
tar_file = tarfile.open(output_path)
tar_file.extractall(output_path_extracted)
tar_file.close()
else:
raise EnvironmentError("Archive format of {} could not be identified".format(output_path))
return output_path_extracted
return output_path
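def _demo_cached_path_local_file():
    # Editor's sketch, not part of the original file: for an existing local
    # path, cached_path() simply validates the file and returns it unchanged;
    # only remote URLs are routed through get_from_cache().
    with tempfile.NamedTemporaryFile(suffix=".json", delete=False) as tmp:
        local_file = tmp.name
    try:
        assert cached_path(local_file) == local_file
    finally:
        os.remove(local_file)
    return local_file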
def http_get(url, temp_file, proxies=None, resume_size=0, user_agent: Union[Dict, str, None] = None):
ua = "transformers/{}; python/{}".format(__version__, sys.version.split()[0])
if is_torch_available():
ua += "; torch/{}".format(torch.__version__)
if is_tf_available():
ua += "; tensorflow/{}".format(tf.__version__)
if isinstance(user_agent, dict):
ua += "; " + "; ".join("{}/{}".format(k, v) for k, v in user_agent.items())
elif isinstance(user_agent, str):
ua += "; " + user_agent
headers = {"user-agent": ua}
if resume_size > 0:
headers["Range"] = "bytes=%d-" % (resume_size,)
response = requests.get(url, stream=True, proxies=proxies, headers=headers)
if response.status_code == 416: # Range not satisfiable
return
content_length = response.headers.get("Content-Length")
total = resume_size + int(content_length) if content_length is not None else None
progress = tqdm(
unit="B",
unit_scale=True,
total=total,
initial=resume_size,
desc="Downloading",
disable=bool(logger.getEffectiveLevel() == logging.NOTSET),
)
for chunk in response.iter_content(chunk_size=1024):
if chunk: # filter out keep-alive new chunks
progress.update(len(chunk))
temp_file.write(chunk)
progress.close()
def get_from_cache(
url,
cache_dir=None,
force_download=False,
proxies=None,
etag_timeout=10,
resume_download=False,
user_agent: Union[Dict, str, None] = None,
local_files_only=False,
) -> Optional[str]:
"""
Given a URL, look for the corresponding file in the local cache.
If it's not there, download it. Then return the path to the cached file.
Return:
None in case of non-recoverable file (non-existent or inaccessible url + no cache on disk).
Local path (string) otherwise
"""
if cache_dir is None:
cache_dir = TRANSFORMERS_CACHE
if isinstance(cache_dir, Path):
cache_dir = str(cache_dir)
os.makedirs(cache_dir, exist_ok=True)
etag = None
if not local_files_only:
try:
response = requests.head(url, allow_redirects=True, proxies=proxies, timeout=etag_timeout)
if response.status_code == 200:
etag = response.headers.get("ETag")
except (EnvironmentError, requests.exceptions.Timeout):
# etag is already None
pass
filename = url_to_filename(url, etag)
# get cache path to put the file
cache_path = os.path.join(cache_dir, filename)
# etag is None = we don't have a connection, or url doesn't exist, or is otherwise inaccessible.
# try to get the last downloaded one
if etag is None:
if os.path.exists(cache_path):
return cache_path
else:
matching_files = [
file
for file in fnmatch.filter(os.listdir(cache_dir), filename + ".*")
if not file.endswith(".json") and not file.endswith(".lock")
]
if len(matching_files) > 0:
return os.path.join(cache_dir, matching_files[-1])
else:
# If files cannot be found and local_files_only=True,
# the models might've been found if local_files_only=False
# Notify the user about that
if local_files_only:
raise ValueError(
"Cannot find the requested files in the cached path and outgoing traffic has been"
" disabled. To enable model look-ups and downloads online, set 'local_files_only'"
" to False."
)
return None
# From now on, etag is not None.
if os.path.exists(cache_path) and not force_download:
return cache_path
# Prevent parallel downloads of the same file with a lock.
lock_path = cache_path + ".lock"
with FileLock(lock_path):
# If the download just completed while the lock was activated.
if os.path.exists(cache_path) and not force_download:
# Even if returning early like here, the lock will be released.
return cache_path
if resume_download:
incomplete_path = cache_path + ".incomplete"
@contextmanager
def _resumable_file_manager():
with open(incomplete_path, "a+b") as f:
yield f
temp_file_manager = _resumable_file_manager
if os.path.exists(incomplete_path):
resume_size = os.stat(incomplete_path).st_size
else:
resume_size = 0
else:
temp_file_manager = partial(tempfile.NamedTemporaryFile, dir=cache_dir, delete=False)
resume_size = 0
# Download to temporary file, then copy to cache dir once finished.
# Otherwise you get corrupt cache entries if the download gets interrupted.
with temp_file_manager() as temp_file:
logger.info("%s not found in cache or force_download set to True, downloading to %s", url, temp_file.name)
http_get(url, temp_file, proxies=proxies, resume_size=resume_size, user_agent=user_agent)
logger.info("storing %s in cache at %s", url, cache_path)
os.replace(temp_file.name, cache_path)
logger.info("creating metadata file for %s", cache_path)
meta = {"url": url, "etag": etag}
meta_path = cache_path + ".json"
with open(meta_path, "w") as meta_file:
json.dump(meta, meta_file)
return cache_path
class cached_property(property):
"""
Descriptor that mimics @property but caches output in member variable.
From tensorflow_datasets
Built-in in functools from Python 3.8.
"""
def __get__(self, obj, objtype=None):
# See docs.python.org/3/howto/descriptor.html#properties
if obj is None:
return self
if self.fget is None:
raise AttributeError("unreadable attribute")
attr = "__cached_" + self.fget.__name__
cached = getattr(obj, attr, None)
if cached is None:
cached = self.fget(obj)
setattr(obj, attr, cached)
return cached
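def _demo_cached_property():
    # Editor's sketch, not part of the original file: the wrapped method runs
    # once per instance and the result is memoized on the instance afterwards.
    class Example:
        def __init__(self):
            self.calls = 0
        @cached_property
        def value(self):
            self.calls += 1
            return self.calls
    example = Example()
    assert example.value == 1 and example.value == 1  # second access hits the cache
    assert example.calls == 1
    return example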
def torch_required(func):
# Chose a different decorator name than in tests so it's clear they are not the same.
@wraps(func)
def wrapper(*args, **kwargs):
if is_torch_available():
return func(*args, **kwargs)
else:
raise ImportError(f"Method `{func.__name__}` requires PyTorch.")
return wrapper
def tf_required(func):
# Chose a different decorator name than in tests so it's clear they are not the same.
@wraps(func)
def wrapper(*args, **kwargs):
if is_tf_available():
return func(*args, **kwargs)
else:
raise ImportError(f"Method `{func.__name__}` requires TF.")
return wrapper
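# ----------------------------------------------------------------------
# Editor's note: a hedged end-of-file sketch; it is not part of the original
# file. It shows the guard decorators above: the wrapped call only succeeds
# when the corresponding framework was importable at module load time.
if __name__ == "__main__":
    @torch_required
    def needs_torch():
        return torch.__version__
    if is_torch_available():
        print("torch available:", needs_torch())
    else:
        try:
            needs_torch()
        except ImportError as err:
            print(err)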
================================================
FILE: bert/generation_utils.py
================================================
# coding=utf-8
# Copyright 2018 The Google AI Language Team Authors, Facebook AI Research authors and The HuggingFace Inc. team.
# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import Iterable, Optional, Tuple
import torch
from torch import Tensor
from torch.nn import functional as F
logger = logging.getLogger(__name__)
class GenerationMixin:
"""
A class containing all of the functions supporting generation, to be used as a mixin in PreTrainedModel.
"""
def prepare_inputs_for_generation(self, input_ids, **kwargs):
return {"input_ids": input_ids}
def adjust_logits_during_generation(self, logits, **kwargs):
return logits
def _use_cache(self, outputs, use_cache):
"""During generation, decide whether to pass the `past` variable to the next forward pass."""
if len(outputs) <= 1 or use_cache is False:
return False
if hasattr(self.config, "mem_len") and self.config.mem_len == 0:
return False
return True
def enforce_repetition_penalty_(self, lprobs, batch_size, num_beams, prev_output_tokens, repetition_penalty):
"""repetition penalty (from CTRL paper https://arxiv.org/abs/1909.05858). """
for i in range(batch_size * num_beams):
for previous_token in set(prev_output_tokens[i].tolist()):
# if score < 0 then the repetition penalty has to be multiplied to reduce the previous token probability
if lprobs[i, previous_token] < 0:
lprobs[i, previous_token] *= repetition_penalty
else:
lprobs[i, previous_token] /= repetition_penalty
def postprocess_next_token_scores(
self,
scores,
input_ids,
no_repeat_ngram_size,
bad_words_ids,
cur_len,
min_length,
max_length,
eos_token_id,
repetition_penalty,
batch_size,
num_beams,
):
# repetition penalty (from CTRL paper https://arxiv.org/abs/1909.05858)
if repetition_penalty != 1.0:
self.enforce_repetition_penalty_(
scores, batch_size, num_beams, input_ids, repetition_penalty,
)
# set eos token prob to zero if min_length is not reached
if eos_token_id is not None and cur_len < min_length:
scores[:, eos_token_id] = -float("inf")
if no_repeat_ngram_size > 0:
# calculate a list of banned tokens to prevent repetitively generating the same ngrams
num_batch_hypotheses = batch_size * num_beams
# from fairseq: https://github.com/pytorch/fairseq/blob/a07cb6f40480928c9e0548b737aadd36ee66ac76/fairseq/sequence_generator.py#L345
banned_batch_tokens = calc_banned_ngram_tokens(
input_ids, num_batch_hypotheses, no_repeat_ngram_size, cur_len
)
for i, banned_tokens in enumerate(banned_batch_tokens):
scores[i, banned_tokens] = -float("inf")
if bad_words_ids is not None:
# calculate a list of banned tokens according to bad words
banned_tokens = calc_banned_bad_words_ids(input_ids, bad_words_ids)
for i, banned_tokens in enumerate(banned_tokens):
scores[i, banned_tokens] = -float("inf")
return scores
@torch.no_grad()
def generate(
self,
input_ids: Optional[torch.LongTensor] = None,
max_length: Optional[int] = None,
min_length: Optional[int] = None,
do_sample: Optional[bool] = None,
early_stopping: Optional[bool] = None,
num_beams: Optional[int] = None,
temperature: Optional[float] = None,
top_k: Optional[int] = None,
top_p: Optional[float] = None,
repetition_penalty: Optional[float] = None,
bad_words_ids: Optional[Iterable[int]] = None,
bos_token_id: Optional[int] = None,
pad_token_id: Optional[int] = None,
eos_token_id: Optional[int] = None,
length_penalty: Optional[float] = None,
no_repeat_ngram_size: Optional[int] = None,
num_return_sequences: Optional[int] = None,
attention_mask: Optional[torch.LongTensor] = None,
decoder_start_token_id: Optional[int] = None,
use_cache: Optional[bool] = None,
**model_specific_kwargs
) -> torch.LongTensor:
r""" Generates sequences for models with a LM head. The method currently supports greedy decoding, beam-search decoding, sampling with temperature, sampling with top-k or nucleus sampling.
Adapted in part from `Facebook's XLM beam search code`_.
.. _`Facebook's XLM beam search code`:
https://github.com/facebookresearch/XLM/blob/9e6f6814d17be4fe5b15f2e6c43eb2b2d76daeb4/src/model/transformer.py#L529
Parameters:
input_ids: (`optional`) `torch.LongTensor` of shape `(batch_size, sequence_length)`
The sequence used as a prompt for the generation. If `None` the method initializes
it as an empty `torch.LongTensor` of shape `(1,)`.
max_length: (`optional`) int
The max length of the sequence to be generated. Between `min_length` and infinity. Default to 20.
min_length: (`optional`) int
The min length of the sequence to be generated. Between 0 and infinity. Default to 0.
do_sample: (`optional`) bool
If set to `False` greedy decoding is used. Otherwise sampling is used. Defaults to `False` as defined in `configuration_utils.PretrainedConfig`.
early_stopping: (`optional`) bool
if set to `True` beam search is stopped when at least `num_beams` sentences finished per batch. Defaults to `False` as defined in `configuration_utils.PretrainedConfig`.
num_beams: (`optional`) int
Number of beams for beam search. Must be between 1 and infinity. 1 means no beam search. Default to 1.
temperature: (`optional`) float
The value used to module the next token probabilities. Must be strictly positive. Default to 1.0.
top_k: (`optional`) int
The number of highest probability vocabulary tokens to keep for top-k-filtering. Between 1 and infinity. Default to 50.
top_p: (`optional`) float
The cumulative probability of the highest-probability vocabulary tokens to keep for nucleus sampling. Must be between 0 and 1. Default to 1.
repetition_penalty: (`optional`) float
The parameter for repetition penalty. Between 1.0 and infinity. 1.0 means no penalty. Default to 1.0.
pad_token_id: (`optional`) int
Padding token. Defaults to the model-specific pad_token_id or None if it does not exist.
bos_token_id: (`optional`) int
BOS token. Defaults to `bos_token_id` as defined in the models config.
eos_token_id: (`optional`) int
EOS token. Defaults to `eos_token_id` as defined in the models config.
length_penalty: (`optional`) float
Exponential penalty to the length. Default to 1.
no_repeat_ngram_size: (`optional`) int
If set to int > 0, all ngrams of size `no_repeat_ngram_size` can only occur once.
bad_words_ids: (`optional`) list of lists of int
`bad_words_ids` contains tokens that are not allowed to be generated. In order to get the tokens of the words that should not appear in the generated text, use `tokenizer.encode(bad_word, add_prefix_space=True)`.
num_return_sequences: (`optional`) int
The number of independently computed returned sequences for each element in the batch. Default to 1.
attention_mask (`optional`) obj: `torch.LongTensor` of same shape as `input_ids`
Mask to avoid performing attention on padding token indices.
Mask values selected in ``[0, 1]``:
``1`` for tokens that are NOT MASKED, ``0`` for MASKED tokens.
Defaults to `None`.
`What are attention masks? <../glossary.html#attention-mask>`__
decoder_start_token_id=None: (`optional`) int
If an encoder-decoder model starts decoding with a different token than BOS.
Defaults to `None` and is changed to `BOS` later.
use_cache: (`optional`) bool
If `use_cache` is True, past key values are used to speed up decoding if applicable to model. Defaults to `True`.
model_specific_kwargs: (`optional`) dict
Additional model specific kwargs will be forwarded to the `forward` function of the model.
Return:
output: `torch.LongTensor` of shape `(batch_size * num_return_sequences, sequence_length)`
sequence_length is either equal to max_length or shorter if all batches finished early due to the `eos_token_id`
Examples::
tokenizer = AutoTokenizer.from_pretrained('distilgpt2') # Initialize tokenizer
model = AutoModelWithLMHead.from_pretrained('distilgpt2') # Download model and configuration from S3 and cache.
outputs = model.generate(max_length=40) # do greedy decoding
print('Generated: {}'.format(tokenizer.decode(outputs[0], skip_special_tokens=True)))
tokenizer = AutoTokenizer.from_pretrained('openai-gpt') # Initialize tokenizer
model = AutoModelWithLMHead.from_pretrained('openai-gpt') # Download model and configuration from S3 and cache.
input_context = 'The dog'
input_ids = tokenizer.encode(input_context, return_tensors='pt') # encode input context
outputs = model.generate(input_ids=input_ids, num_beams=5, num_return_sequences=3, temperature=1.5) # generate 3 independent sequences using beam search decoding (5 beams) with sampling from initial context 'The dog'
for i in range(3): # 3 output sequences were generated
print('Generated {}: {}'.format(i, tokenizer.decode(outputs[i], skip_special_tokens=True)))
tokenizer = AutoTokenizer.from_pretrained('distilgpt2') # Initialize tokenizer
model = AutoModelWithLMHead.from_pretrained('distilgpt2') # Download model and configuration from S3 and cache.
input_context = 'The dog'
input_ids = tokenizer.encode(input_context, return_tensors='pt') # encode input context
outputs = model.generate(input_ids=input_ids, max_length=40, temperature=0.7, num_return_sequences=3) # generate 3 independent sequences by sampling
for i in range(3): # 3 output sequences were generated
print('Generated {}: {}'.format(i, tokenizer.decode(outputs[i], skip_special_tokens=True)))
tokenizer = AutoTokenizer.from_pretrained('ctrl') # Initialize tokenizer
model = AutoModelWithLMHead.from_pretrained('ctrl') # Download model and configuration from S3 and cache.
input_context = 'Legal My neighbor is' # "Legal" is one of the control codes for ctrl
input_ids = tokenizer.encode(input_context, return_tensors='pt') # encode input context
outputs = model.generate(input_ids=input_ids, max_length=50, temperature=0.7, repetition_penalty=1.2) # generate sequences
print('Generated: {}'.format(tokenizer.decode(outputs[0], skip_special_tokens=True)))
tokenizer = AutoTokenizer.from_pretrained('gpt2') # Initialize tokenizer
model = AutoModelWithLMHead.from_pretrained('gpt2') # Download model and configuration from S3 and cache.
input_context = 'My cute dog'
bad_words_ids = [tokenizer.encode(bad_word, add_prefix_space=True) for bad_word in ['idiot', 'stupid', 'shut up']]
input_ids = tokenizer.encode(input_context, return_tensors='pt') # encode input context
outputs = model.generate(input_ids=input_ids, max_length=100, do_sample=True, bad_words_ids=bad_words_ids) # generate sequences without allowing bad_words to be generated
"""
# We cannot generate if the model does not have a LM head
if self.get_output_embeddings() is None:
raise AttributeError(
"You tried to generate sequences with a model that does not have a LM Head."
"Please use another model class (e.g. `OpenAIGPTLMHeadModel`, `XLNetLMHeadModel`, `GPT2LMHeadModel`, `CTRLLMHeadModel`, `T5WithLMHeadModel`, `TransfoXLLMHeadModel`, `XLMWithLMHeadModel`, `BartForConditionalGeneration` )"
)
max_length = max_length if max_length is not None else self.config.max_length
min_length = min_length if min_length is not None else self.config.min_length
do_sample = do_sample if do_sample is not None else self.config.do_sample
early_stopping = early_stopping if early_stopping is not None else self.config.early_stopping
use_cache = use_cache if use_cache is not None else self.config.use_cache
num_beams = num_beams if num_beams is not None else self.config.num_beams
temperature = temperature if temperature is not None else self.config.temperature
top_k = top_k if top_k is not None else self.config.top_k
top_p = top_p if top_p is not None else self.config.top_p
repetition_penalty = repetition_penalty if repetition_penalty is not None else self.config.repetition_penalty
bos_token_id = bos_token_id if bos_token_id is not None else self.config.bos_token_id
pad_token_id = pad_token_id if pad_token_id is not None else self.config.pad_token_id
eos_token_id = eos_token_id if eos_token_id is not None else self.config.eos_token_id
length_penalty = length_penalty if length_penalty is not None else self.config.length_penalty
no_repeat_ngram_size = (
no_repeat_ngram_size if no_repeat_ngram_size is not None else self.config.no_repeat_ngram_size
)
bad_words_ids = bad_words_ids if bad_words_ids is not None else self.config.bad_words_ids
num_return_sequences = (
num_return_sequences if num_return_sequences is not None else self.config.num_return_sequences
)
decoder_start_token_id = (
decoder_start_token_id if decoder_start_token_id is not None else self.config.decoder_start_token_id
)
if input_ids is not None:
batch_size = input_ids.shape[0] # overridden by the input batch_size
else:
batch_size = 1
assert isinstance(max_length, int) and max_length > 0, "`max_length` should be a strictly positive integer."
assert isinstance(min_length, int) and min_length >= 0, "`min_length` should be a non-negative integer."
assert isinstance(do_sample, bool), "`do_sample` should be a boolean."
assert isinstance(early_stopping, bool), "`early_stopping` should be a boolean."
assert isinstance(use_cache, bool), "`use_cache` should be a boolean."
assert isinstance(num_beams, int) and num_beams > 0, "`num_beams` should be a strictly positive integer."
assert temperature > 0, "`temperature` should be strictly positive."
assert isinstance(top_k, int) and top_k >= 0, "`top_k` should be a non-negative integer."
assert 0 <= top_p <= 1, "`top_p` should be between 0 and 1."
assert repetition_penalty >= 1.0, "`repetition_penalty` should be >= 1."
assert input_ids is not None or (
isinstance(bos_token_id, int) and bos_token_id >= 0
), "If input_ids is not defined, `bos_token_id` should be a positive integer."
assert pad_token_id is None or (
isinstance(pad_token_id, int) and (pad_token_id >= 0)
), "`pad_token_id` should be a positive integer."
assert (eos_token_id is None) or (
isinstance(eos_token_id, int) and (eos_token_id >= 0)
), "`eos_token_id` should be a positive integer."
assert length_penalty > 0, "`length_penalty` should be strictly positive."
assert (
isinstance(no_repeat_ngram_size, int) and no_repeat_ngram_size >= 0
), "`no_repeat_ngram_size` should be a positive integer."
assert (
isinstance(num_return_sequences, int) and num_return_sequences > 0
), "`num_return_sequences` should be a strictly positive integer."
assert (
bad_words_ids is None or isinstance(bad_words_ids, list) and isinstance(bad_words_ids[0], list)
), "`bad_words_ids` is either `None` or a list of lists of tokens that should not be generated"
if input_ids is None:
assert isinstance(bos_token_id, int) and bos_token_id >= 0, (
"you should either supply a context to complete as `input_ids` input "
"or a `bos_token_id` (integer >= 0) as a first token to start the generation."
)
input_ids = torch.full(
(batch_size, 1), bos_token_id, dtype=torch.long, device=next(self.parameters()).device,
)
else:
assert input_ids.dim() == 2, "Input prompt should be of shape (batch_size, sequence length)."
# do not allow duplicate outputs when doing greedy decoding
if do_sample is False:
if num_beams == 1:
# no_beam_search greedy generation conditions
assert (
num_return_sequences == 1
), "Greedy decoding will always produce the same output for num_beams == 1 and num_return_sequences > 1. Please set num_return_sequences = 1"
else:
# beam_search greedy generation conditions
assert (
num_beams >= num_return_sequences
), "Greedy beam search decoding cannot return more sequences than it has beams. Please set num_beams >= num_return_sequences"
# create attention mask if necessary
# TODO (PVP): this should later be handled by the forward fn() in each model (see PR 3140)
if (attention_mask is None) and (pad_token_id is not None) and (pad_token_id in input_ids):
attention_mask = input_ids.ne(pad_token_id).long()
elif attention_mask is None:
attention_mask = input_ids.new_ones(input_ids.shape)
# set pad_token_id to eos_token_id if not set. Important that this is done after
# attention_mask is created
if pad_token_id is None and eos_token_id is not None:
logger.warning(
"Setting `pad_token_id` to {} (first `eos_token_id`) to generate sequence".format(eos_token_id)
)
pad_token_id = eos_token_id
# current position and vocab size
if hasattr(self.config, "vocab_size"):
vocab_size = self.config.vocab_size
elif (
self.config.is_encoder_decoder
and hasattr(self.config, "decoder")
and hasattr(self.config.decoder, "vocab_size")
):
vocab_size = self.config.decoder.vocab_size
# set effective batch size and effective batch multiplier according to do_sample
if do_sample:
effective_batch_size = batch_size * num_return_sequences
effective_batch_mult = num_return_sequences
else:
effective_batch_size = batch_size
effective_batch_mult = 1
if self.config.is_encoder_decoder:
if decoder_start_token_id is None:
decoder_start_token_id = bos_token_id
assert (
decoder_start_token_id is not None
), "decoder_start_token_id or bos_token_id has to be defined for encoder-decoder generation"
assert hasattr(self, "get_encoder"), "{} should have a 'get_encoder' function defined".format(self)
assert callable(self.get_encoder), "{} should be a method".format(self.get_encoder)
# get encoder and store encoder outputs
encoder = self.get_encoder()
encoder_outputs: tuple = encoder(input_ids, attention_mask=attention_mask)
# Expand input ids if num_beams > 1 or num_return_sequences > 1
if num_return_sequences > 1 or num_beams > 1:
input_ids_len = input_ids.shape[-1]
input_ids = input_ids.unsqueeze(1).expand(batch_size, effective_batch_mult * num_beams, input_ids_len)
attention_mask = attention_mask.unsqueeze(1).expand(
batch_size, effective_batch_mult * num_beams, input_ids_len
)
input_ids = input_ids.contiguous().view(
effective_batch_size * num_beams, input_ids_len
) # shape: (batch_size * num_return_sequences * num_beams, cur_len)
attention_mask = attention_mask.contiguous().view(
effective_batch_size * num_beams, input_ids_len
) # shape: (batch_size * num_return_sequences * num_beams, cur_len)
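# Illustrative shapes for the expansion above (added comment; the numbers are made up):
# with batch_size = 2, num_beams = 3 and do_sample = False, effective_batch_mult = 1, so a
# (2, cur_len) prompt becomes (6, cur_len); with do_sample = True and num_return_sequences = 2,
# effective_batch_size = 4 and the prompt becomes (12, cur_len).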
if self.config.is_encoder_decoder:
# create empty decoder_input_ids
input_ids = torch.full(
(effective_batch_size * num_beams, 1),
decoder_start_token_id,
dtype=torch.long,
device=next(self.parameters()).device,
)
cur_len = 1
assert (
batch_size == encoder_outputs[0].shape[0]
), f"expected encoder_outputs[0] to have 1st dimension bs={batch_size}, got {encoder_outputs[0].shape[0]} "
# expand batch_idx to assign correct encoder output for expanded input_ids (due to num_beams > 1 and num_return_sequences > 1)
expanded_batch_idxs = (
torch.arange(batch_size)
.view(-1, 1)
.repeat(1, num_beams * effective_batch_mult)
.view(-1)
.to(input_ids.device)
)
# expand encoder_outputs
encoder_outputs = (encoder_outputs[0].index_select(0, expanded_batch_idxs), *encoder_outputs[1:])
else:
encoder_outputs = None
cur_len = input_ids.shape[-1]
assert (
cur_len < max_length
), f"The context has {cur_len} number of tokens, but `max_length` is only {max_length}. Please make sure that `max_length` is bigger than the number of tokens, by setting either `generate(max_length=...,...)` or `config.max_length = ...`"
if num_beams > 1:
output = self._generate_beam_search(
input_ids,
cur_len=cur_len,
max_length=max_length,
min_length=min_length,
do_sample=do_sample,
early_stopping=early_stopping,
temperature=temperature,
top_k=top_k,
top_p=top_p,
repetition_penalty=repetition_penalty,
no_repeat_ngram_size=no_repeat_ngram_size,
bad_words_ids=bad_words_ids,
pad_token_id=pad_token_id,
eos_token_id=eos_token_id,
batch_size=effective_batch_size,
num_return_sequences=num_return_sequences,
length_penalty=length_penalty,
num_beams=num_beams,
vocab_size=vocab_size,
encoder_outputs=encoder_outputs,
attention_mask=attention_mask,
use_cache=use_cache,
model_specific_kwargs=model_specific_kwargs,
)
else:
output = self._generate_no_beam_search(
input_ids,
cur_len=cur_len,
max_length=max_length,
min_length=min_length,
do_sample=do_sample,
temperature=temperature,
top_k=top_k,
top_p=top_p,
repetition_penalty=repetition_penalty,
no_repeat_ngram_size=no_repeat_ngram_size,
bad_words_ids=bad_words_ids,
pad_token_id=pad_token_id,
eos_token_id=eos_token_id,
batch_size=effective_batch_size,
encoder_outputs=encoder_outputs,
attention_mask=attention_mask,
use_cache=use_cache,
model_specific_kwargs=model_specific_kwargs,
)
return output
def _generate_no_beam_search(
self,
input_ids,
cur_len,
max_length,
min_length,
do_sample,
temperature,
top_k,
top_p,
repetition_penalty,
no_repeat_ngram_size,
bad_words_ids,
pad_token_id,
eos_token_id,
batch_size,
encoder_outputs,
attention_mask,
use_cache,
model_specific_kwargs,
):
""" Generate sequences for each example without beam search (num_beams == 1).
All returned sequences are generated independently.
"""
# length of generated sentences / unfinished sentences
unfinished_sents = input_ids.new(batch_size).fill_(1)
sent_lengths = input_ids.new(batch_size).fill_(max_length)
past = (encoder_outputs, None) if encoder_outputs is not None else None
while cur_len < max_length:
model_inputs = self.prepare_inputs_for_generation(
input_ids, past=past, attention_mask=attention_mask, use_cache=use_cache, **model_specific_kwargs
)
outputs = self(**model_inputs)
next_token_logits = outputs[0][:, -1, :]
scores = self.postprocess_next_token_scores(
scores=next_token_logits,
input_ids=input_ids,
no_repeat_ngram_size=no_repeat_ngram_size,
bad_words_ids=bad_words_ids,
cur_len=cur_len,
min_length=min_length,
max_length=max_length,
eos_token_id=eos_token_id,
repetition_penalty=repetition_penalty,
batch_size=batch_size,
num_beams=1,
)
# if model has past, then set the past variable to speed up decoding
if self._use_cache(outputs, use_cache):
past = outputs[1]
if do_sample:
# Temperature (higher temperature => more likely to sample low probability tokens)
if temperature != 1.0:
scores = scores / temperature
# Top-p/top-k filtering
next_token_logscores = top_k_top_p_filtering(scores, top_k=top_k, top_p=top_p)
# Sample
probs = F.softmax(next_token_logscores, dim=-1)
next_token = torch.multinomial(probs, num_samples=1).squeeze(1)
else:
# Greedy decoding
next_token = torch.argmax(next_token_logits, dim=-1)
# update generations and finished sentences
if eos_token_id is not None:
# pad finished sentences if eos_token_id exist
tokens_to_add = next_token * unfinished_sents + (pad_token_id) * (1 - unfinished_sents)
else:
tokens_to_add = next_token
# add token and increase length by one
input_ids = torch.cat([input_ids, tokens_to_add.unsqueeze(-1)], dim=-1)
cur_len = cur_len + 1
if eos_token_id is not None:
eos_in_sents = tokens_to_add == eos_token_id
# if sentence is unfinished and the token to add is eos, sent_lengths is filled with current length
is_sents_unfinished_and_token_to_add_is_eos = unfinished_sents.mul(eos_in_sents.long()).bool()
sent_lengths.masked_fill_(is_sents_unfinished_and_token_to_add_is_eos, cur_len)
# unfinished_sents is set to zero if eos in sentence
unfinished_sents.mul_((~eos_in_sents).long())
# stop when there is an EOS token in each sentence, or when we exceed the maximum length
if unfinished_sents.max() == 0:
break
# extend attention_mask for the newly generated token if the model is decoder-only
if self.config.is_encoder_decoder is False:
attention_mask = torch.cat(
[attention_mask, attention_mask.new_ones((attention_mask.shape[0], 1))], dim=-1
)
return input_ids
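# A minimal sketch (added for illustration, not part of the upstream file) of the per-step token
# selection performed in the loop above; the top_k/top_p values are arbitrary examples:
#
#     if do_sample:
#         scores = scores / temperature                                 # temperature != 1.0 softens/sharpens the distribution
#         scores = top_k_top_p_filtering(scores, top_k=50, top_p=0.9)   # mask unlikely tokens
#         next_token = torch.multinomial(F.softmax(scores, dim=-1), num_samples=1).squeeze(1)
#     else:
#         next_token = torch.argmax(next_token_logits, dim=-1)          # greedy: most likely token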
def _generate_beam_search(
self,
input_ids,
cur_len,
max_length,
min_length,
do_sample,
early_stopping,
temperature,
top_k,
top_p,
repetition_penalty,
no_repeat_ngram_size,
bad_words_ids,
pad_token_id,
eos_token_id,
batch_size,
num_return_sequences,
length_penalty,
num_beams,
vocab_size,
encoder_outputs,
attention_mask,
use_cache,
model_specific_kwargs,
):
""" Generate sequences for each example with beam search.
"""
# generated hypotheses
generated_hyps = [
BeamHypotheses(num_beams, max_length, length_penalty, early_stopping=early_stopping)
for _ in range(batch_size)
]
# scores for each sentence in the beam
beam_scores = torch.zeros((batch_size, num_beams), dtype=torch.float, device=input_ids.device)
# for greedy decoding, make sure that only tokens of the first beam are considered so that the exact same tokens are not sampled across all beams
if do_sample is False:
beam_scores[:, 1:] = -1e9
beam_scores = beam_scores.view(-1) # shape (batch_size * num_beams,)
# cache compute states
past = (encoder_outputs, None) if encoder_outputs is not None else None
# done sentences
done = [False for _ in range(batch_size)]
while cur_len < max_length:
model_inputs = self.prepare_inputs_for_generation(
input_ids, past=past, attention_mask=attention_mask, use_cache=use_cache, **model_specific_kwargs
)
outputs = self(**model_inputs) # (batch_size * num_beams, cur_len, vocab_size)
next_token_logits = outputs[0][:, -1, :] # (batch_size * num_beams, vocab_size)
# if model has past, then set the past variable to speed up decoding
if self._use_cache(outputs, use_cache):
past = outputs[1]
if self.config.is_encoder_decoder and do_sample is False:
# TODO (PVP) still a bit hacky here - there might be a better solution
next_token_logits = self.adjust_logits_during_generation(
next_token_logits, cur_len=cur_len, max_length=max_length
)
scores = F.log_softmax(next_token_logits, dim=-1) # (batch_size * num_beams, vocab_size)
scores = self.postprocess_next_token_scores(
scores=scores,
input_ids=input_ids,
no_repeat_ngram_size=no_repeat_ngram_size,
bad_words_ids=bad_words_ids,
cur_len=cur_len,
min_length=min_length,
max_length=max_length,
eos_token_id=eos_token_id,
repetition_penalty=repetition_penalty,
batch_size=batch_size,
num_beams=num_beams,
)
assert scores.shape == (batch_size * num_beams, vocab_size), "Shapes of scores: {} != {}".format(
scores.shape, (batch_size * num_beams, vocab_size)
)
if do_sample:
_scores = scores + beam_scores[:, None].expand_as(scores) # (batch_size * num_beams, vocab_size)
# Temperature
if temperature != 1.0:
_scores = _scores / temperature
# Top-p/top-k filtering
_scores = top_k_top_p_filtering(
_scores, top_k=top_k, top_p=top_p, min_tokens_to_keep=2
) # (batch_size * num_beams, vocab_size)
# re-organize to group the beam together to sample from all beam_idxs
_scores = _scores.contiguous().view(
batch_size, num_beams * vocab_size
) # (batch_size, num_beams * vocab_size)
# Sample 2 next tokens for each beam (so we have some spare tokens and match output of greedy beam search)
probs = F.softmax(_scores, dim=-1)
next_tokens = torch.multinomial(probs, num_samples=2 * num_beams) # (batch_size, num_beams * 2)
# Compute next scores
next_scores = torch.gather(_scores, -1, next_tokens) # (batch_size, num_beams * 2)
# sort the sampled vector to make sure that the first num_beams samples are the best
next_scores, next_scores_indices = torch.sort(next_scores, descending=True, dim=1)
next_tokens = torch.gather(next_tokens, -1, next_scores_indices) # (batch_size, num_beams * 2)
else:
next_scores = scores + beam_scores[:, None].expand_as(scores) # (batch_size * num_beams, vocab_size)
# re-organize to group the beams together (we keep the top hypotheses across beams)
next_scores = next_scores.view(
batch_size, num_beams * vocab_size
) # (batch_size, num_beams * vocab_size)
next_scores, next_tokens = torch.topk(next_scores, 2 * num_beams, dim=1, largest=True, sorted=True)
assert next_scores.size() == next_tokens.size() == (batch_size, 2 * num_beams)
# next batch beam content
next_batch_beam = []
# for each sentence
for batch_idx in range(batch_size):
# if we are done with this sentence, add a pad token
if done[batch_idx]:
assert (
len(generated_hyps[batch_idx]) >= num_beams
), "Batch can only be done if at least {} beams have been generated".format(num_beams)
assert (
eos_token_id is not None and pad_token_id is not None
), "generated beams >= num_beams -> eos_token_id and pad_token have to be defined"
next_batch_beam.extend([(0, pad_token_id, 0)] * num_beams) # pad the batch
continue
# next sentence beam content, this will get added to next_batch_beam
next_sent_beam = []
# next tokens for this sentence
for beam_token_rank, (beam_token_id, beam_token_score) in enumerate(
zip(next_tokens[batch_idx], next_scores[batch_idx])
):
# get beam and token IDs
beam_id = beam_token_id // vocab_size
token_id = beam_token_id % vocab_size
effective_beam_id = batch_idx * num_beams + beam_id
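# Worked example (added comment; the numbers are illustrative): with num_beams = 3 and
# vocab_size = 30522, a flat index beam_token_id = 61050 gives beam_id = 61050 // 30522 = 2
# and token_id = 61050 % 30522 = 6, i.e. token 6 proposed by the third beam of this batch item.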
# add to generated hypotheses if end of sentence
if (eos_token_id is not None) and (token_id.item() == eos_token_id):
# if beam_token does not belong to top num_beams tokens, it should not be added
is_beam_token_worse_than_top_num_beams = beam_token_rank >= num_beams
if is_beam_token_worse_than_top_num_beams:
continue
generated_hyps[batch_idx].add(
input_ids[effective_beam_id].clone(), beam_token_score.item(),
)
else:
# add next predicted token since it is not eos_token
next_sent_beam.append((beam_token_score, token_id, effective_beam_id))
# once the beam for next step is full, don't add more tokens to it.
if len(next_sent_beam) == num_beams:
break
# Check if we are done so that we can save a pad step if all(done)
done[batch_idx] = done[batch_idx] or generated_hyps[batch_idx].is_done(
next_scores[batch_idx].max().item(), cur_len
)
# update next beam content
assert len(next_sent_beam) == num_beams, "Beam should always be full"
next_batch_beam.extend(next_sent_beam)
assert len(next_batch_beam) == num_beams * (batch_idx + 1), "We should have added num_beams each step"
# stop when we are done with each sentence
if all(done):
break
# sanity check / prepare next batch
assert len(next_batch_beam) == batch_size * num_beams
beam_scores = beam_scores.new([x[0] for x in next_batch_beam])
beam_tokens = input_ids.new([x[1] for x in next_batch_beam])
beam_idx = input_ids.new([x[2] for x in next_batch_beam])
# re-order batch and update current length
input_ids = input_ids[beam_idx, :]
input_ids = torch.cat([input_ids, beam_tokens.unsqueeze(1)], dim=-1)
cur_len = cur_len + 1
# re-order internal states
if past is not None:
past = self._reorder_cache(past, beam_idx)
# extend attention_mask for the newly generated token if the model is decoder-only
if self.config.is_encoder_decoder is False:
attention_mask = torch.cat(
[attention_mask, attention_mask.new_ones((attention_mask.shape[0], 1))], dim=-1
)
# finalize all open beam hypotheses and add to generated hypotheses
for batch_idx in range(batch_size):
if done[batch_idx]:
continue
# test that beam scores match previously calculated scores if not eos and batch_idx not done
if eos_token_id is not None and all(
(token_id % vocab_size).item() != eos_token_id for token_id in next_tokens[batch_idx]
):
assert torch.all(
next_scores[batch_idx, :num_beams] == beam_scores.view(batch_size, num_beams)[batch_idx]
), "If batch_idx is not done, final next scores: {} have to equal to accumulated beam_scores: {}".format(
next_scores[:, :num_beams][batch_idx], beam_scores.view(batch_size, num_beams)[batch_idx],
)
# need to add best num_beams hypotheses to generated hyps
for beam_id in range(num_beams):
effective_beam_id = batch_idx * num_beams + beam_id
final_score = beam_scores[effective_beam_id].item()
final_tokens = input_ids[effective_beam_id]
generated_hyps[batch_idx].add(final_tokens, final_score)
# output_batch_size and output_num_return_sequences_per_batch differ depending on whether sampling or greedy/beam generation was used
output_batch_size = batch_size if do_sample else batch_size * num_return_sequences
output_num_return_sequences_per_batch = 1 if do_sample else num_return_sequences
# select the best hypotheses
sent_lengths = input_ids.new(output_batch_size)
best = []
# retrieve best hypotheses
for i, hypotheses in enumerate(generated_hyps):
sorted_hyps = sorted(hypotheses.beams, key=lambda x: x[0])
for j in range(output_num_return_sequences_per_batch):
effective_batch_idx = output_num_return_sequences_per_batch * i + j
best_hyp = sorted_hyps.pop()[1]
sent_lengths[effective_batch_idx] = len(best_hyp)
best.append(best_hyp)
# shorter sequences are padded
if sent_lengths.min().item() != sent_lengths.max().item():
assert pad_token_id is not None, "`Pad_token_id` has to be defined"
sent_max_len = min(sent_lengths.max().item() + 1, max_length)
decoded = input_ids.new(output_batch_size, sent_max_len).fill_(pad_token_id)
# fill with hypothesis and eos_token_id if necessary
for i, hypo in enumerate(best):
decoded[i, : sent_lengths[i]] = hypo
if sent_lengths[i] < max_length:
decoded[i, sent_lengths[i]] = eos_token_id
else:
# none of the hypotheses have an eos_token
assert (len(hypo) == max_length for hypo in best)
decoded = torch.stack(best).type(torch.long).to(next(self.parameters()).device)
return decoded
@staticmethod
def _reorder_cache(past: Tuple, beam_idx: Tensor) -> Tuple[Tensor]:
return tuple(layer_past.index_select(1, beam_idx) for layer_past in past)
def calc_banned_ngram_tokens(prev_input_ids: Tensor, num_hypos: int, no_repeat_ngram_size: int, cur_len: int) -> None:
"""Copied from fairseq for no_repeat_ngram in beam_search"""
if cur_len + 1 < no_repeat_ngram_size:
# return no banned tokens if we haven't generated no_repeat_ngram_size tokens yet
return [[] for _ in range(num_hypos)]
generated_ngrams = [{} for _ in range(num_hypos)]
for idx in range(num_hypos):
gen_tokens = prev_input_ids[idx].tolist()
generated_ngram = generated_ngrams[idx]
for ngram in zip(*[gen_tokens[i:] for i in range(no_repeat_ngram_size)]):
prev_ngram_tuple = tuple(ngram[:-1])
generated_ngram[prev_ngram_tuple] = generated_ngram.get(prev_ngram_tuple, []) + [ngram[-1]]
def _get_generated_ngrams(hypo_idx):
# Before decoding the next token, prevent decoding of ngrams that have already appeared
start_idx = cur_len + 1 - no_repeat_ngram_size
ngram_idx = tuple(prev_input_ids[hypo_idx, start_idx:cur_len].tolist())
return generated_ngrams[hypo_idx].get(ngram_idx, [])
banned_tokens = [_get_generated_ngrams(hypo_idx) for hypo_idx in range(num_hypos)]
return banned_tokens
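# Worked example (added for clarity; the token ids are made up): with no_repeat_ngram_size = 2 and a
# hypothesis [5, 7, 5], the generated bigrams are (5, 7) and (7, 5), giving
# generated_ngrams = {(5,): [7], (7,): [5]}. The current prefix is the last token (5,), so token 7
# is banned at this step and the bigram (5, 7) cannot be generated a second time.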
def calc_banned_bad_words_ids(prev_input_ids: Iterable[int], bad_words_ids: Iterable[int]) -> Iterable[int]:
banned_tokens = []
def _tokens_match(prev_tokens, tokens):
if len(tokens) == 0:
# if the bad word is a single token, always ban it
return True
if len(tokens) > len(prev_tokens):
# if the bad word's tokens are longer than the previous tokens, they can't match
return False
if prev_tokens[-len(tokens) :] == tokens:
# if tokens match
return True
else:
return False
for prev_input_ids_slice in prev_input_ids:
banned_tokens_slice = []
for banned_token_seq in bad_words_ids:
assert len(banned_token_seq) > 0, "Banned word token sequences {} cannot contain an empty list".format(
bad_words_ids
)
if _tokens_match(prev_input_ids_slice.tolist(), banned_token_seq[:-1]) is False:
# if tokens do not match continue
continue
banned_tokens_slice.append(banned_token_seq[-1])
banned_tokens.append(banned_tokens_slice)
return banned_tokens
def top_k_top_p_filtering(
logits: Tensor,
top_k: int = 0,
top_p: float = 1.0,
filter_value: float = -float("Inf"),
min_tokens_to_keep: int = 1,
) -> Tensor:
""" Filter a distribution of logits using top-k and/or nucleus (top-p) filtering
Args:
logits: logits distribution shape (batch size, vocabulary size)
if top_k > 0: keep only top k tokens with highest probability (top-k filtering).
if top_p < 1.0: keep the top tokens with cumulative probability >= top_p (nucleus filtering).
Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751)
Make sure we keep at least min_tokens_to_keep per batch example in the output
From: https://gist.github.com/thomwolf/1a5a29f6962089e871b94cbd09daf317
"""
if top_k > 0:
top_k = min(max(top_k, min_tokens_to_keep), logits.size(-1)) # Safety check
# Remove all tokens with a probability less than the last token of the top-k
indices_to_remove = logits < torch.topk(logits, top_k)[0][..., -1, None]
logits[indices_to_remove] = filter_value
if top_p < 1.0:
sorted_logits, sorted_indices = torch.sort(logits, descending=True)
cumulative_probs = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1)
# Remove tokens with cumulative probability above the threshold (positions marked 0 in the mask are kept)
sorted_indices_to_remove = cumulative_probs > top_p
if min_tokens_to_keep > 1:
# Keep at least min_tokens_to_keep (set to min_tokens_to_keep-1 because we add the first one below)
sorted_indices_to_remove[..., :min_tokens_to_keep] = 0
# Shift the indices to the right to keep also the first token above the threshold
sorted_indices_to_remove[..., 1:] = sorted_indices_to_remove[..., :-1].clone()
sorted_indices_to_remove[..., 0] = 0
# scatter sorted tensors to original indexing
indices_to_remove = sorted_indices_to_remove.scatter(1, sorted_indices, sorted_indices_to_remove)
logits[indices_to_remove] = filter_value
return logits
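# Usage sketch (added; not part of the upstream file). `logits` is assumed to be a
# (batch_size, vocab_size) float tensor; note the function modifies its input in place,
# hence the clone():
#
#     filtered = top_k_top_p_filtering(logits.clone(), top_k=50, top_p=0.95)
#     probs = F.softmax(filtered, dim=-1)
#     next_tokens = torch.multinomial(probs, num_samples=1)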
class BeamHypotheses(object):
def __init__(self, num_beams, max_length, length_penalty, early_stopping):
"""
Initialize n-best list of hypotheses.
"""
self.max_length = max_length - 1 # ignoring bos_token
self.length_penalty = length_penalty
self.early_stopping = early_stopping
self.num_beams = num_beams
self.beams = []
self.worst_score = 1e9
def __len__(self):
"""
Number of hypotheses in the list.
"""
return len(self.beams)
def add(self, hyp, sum_logprobs):
"""
Add a new hypothesis to the list.
"""
score = sum_logprobs / len(hyp) ** self.length_penalty
if len(self) < self.num_beams or score > self.worst_score:
self.beams.append((score, hyp))
if len(self) > self.num_beams:
sorted_scores = sorted([(s, idx) for idx, (s, _) in enumerate(self.beams)])
del self.beams[sorted_scores[0][1]]
self.worst_score = sorted_scores[1][0]
else:
self.worst_score = min(score, self.worst_score)
def is_done(self, best_sum_logprobs, cur_len):
"""
If there are enough hypotheses and none of the hypotheses being generated
can become better than the worst one in the heap, then we are done with this sentence.
"""
if len(self) < self.num_beams:
return False
elif self.early_stopping:
return True
else:
cur_score = best_sum_logprobs / cur_len ** self.length_penalty
ret = self.worst_score >= cur_score
return ret
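# Usage sketch for BeamHypotheses (added; the token ids and scores below are made up):
#
#     hyps = BeamHypotheses(num_beams=2, max_length=20, length_penalty=1.0, early_stopping=False)
#     hyps.add(torch.tensor([101, 2023, 102]), sum_logprobs=-2.3)   # score -2.3 / 3
#     hyps.add(torch.tensor([101, 2003, 102]), sum_logprobs=-4.1)   # score -4.1 / 3 (worst kept)
#     hyps.is_done(best_sum_logprobs=-5.0, cur_len=3)               # True: -5.0 / 3 is worse than the worst kept score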
================================================
FILE: bert/modeling_bert.py
================================================
# coding=utf-8
# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team.
# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""PyTorch BERT model. """
import logging
import math
import os
import warnings
import torch
import torch.utils.checkpoint
from torch import nn
from torch.nn import CrossEntropyLoss, MSELoss
from .activations import gelu, gelu_new, swish
from .configuration_bert import BertConfig
from .file_utils import add_code_sample_docstrings, add_start_docstrings, add_start_docstrings_to_callable
from .modeling_utils import PreTrainedModel, find_pruneable_heads_and_indices, prune_linear_layer
logger = logging.getLogger(__name__)
_TOKENIZER_FOR_DOC = "BertTokenizer"
BERT_PRETRAINED_MODEL_ARCHIVE_LIST = [
"bert-base-uncased",
"bert-large-uncased",
"bert-base-cased",
"bert-large-cased",
"bert-base-multilingual-uncased",
"bert-base-multilingual-cased",
"bert-base-chinese",
"bert-base-german-cased",
"bert-large-uncased-whole-word-masking",
"bert-large-cased-whole-word-masking",
"bert-large-uncased-whole-word-masking-finetuned-squad",
"bert-large-cased-whole-word-masking-finetuned-squad",
"bert-base-cased-finetuned-mrpc",
"bert-base-german-dbmdz-cased",
"bert-base-german-dbmdz-uncased",
"cl-tohoku/bert-base-japanese",
"cl-tohoku/bert-base-japanese-whole-word-masking",
"cl-tohoku/bert-base-japanese-char",
"cl-tohoku/bert-base-japanese-char-whole-word-masking",
"TurkuNLP/bert-base-finnish-cased-v1",
"TurkuNLP/bert-base-finnish-uncased-v1",
"wietsedv/bert-base-dutch-cased",
# See all BERT models at https://huggingface.co/models?filter=bert
]
def load_tf_weights_in_bert(model, config, tf_checkpoint_path):
""" Load tf checkpoints in a pytorch model.
"""
try:
import re
import numpy as np
import tensorflow as tf
except ImportError:
logger.error(
"Loading a TensorFlow model in PyTorch, requires TensorFlow to be installed. Please see "
"https://www.tensorflow.org/install/ for installation instructions."
)
raise
tf_path = os.path.abspath(tf_checkpoint_path)
logger.info("Converting TensorFlow checkpoint from {}".format(tf_path))
# Load weights from TF model
init_vars = tf.train.list_variables(tf_path)
names = []
arrays = []
for name, shape in init_vars:
logger.info("Loading TF weight {} with shape {}".format(name, shape))
array = tf.train.load_variable(tf_path, name)
names.append(name)
arrays.append(array)
for name, array in zip(names, arrays):
name = name.split("/")
# adam_v and adam_m are variables used in AdamWeightDecayOptimizer to calculate m and v
# which are not required for using the pretrained model
if any(
n in ["adam_v", "adam_m", "AdamWeightDecayOptimizer", "AdamWeightDecayOptimizer_1", "global_step"]
for n in name
):
logger.info("Skipping {}".format("/".join(name)))
continue
pointer = model
for m_name in name:
if re.fullmatch(r"[A-Za-z]+_\d+", m_name):
scope_names = re.split(r"_(\d+)", m_name)
else:
scope_names = [m_name]
if scope_names[0] == "kernel" or scope_names[0] == "gamma":
pointer = getattr(pointer, "weight")
elif scope_names[0] == "output_bias" or scope_names[0] == "beta":
pointer = getattr(pointer, "bias")
elif scope_names[0] == "output_weights":
pointer = getattr(pointer, "weight")
elif scope_names[0] == "squad":
pointer = getattr(pointer, "classifier")
else:
try:
pointer = getattr(pointer, scope_names[0])
except AttributeError:
logger.info("Skipping {}".format("/".join(name)))
continue
if len(scope_names) >= 2:
num = int(scope_names[1])
pointer = pointer[num]
if m_name[-11:] == "_embeddings":
pointer = getattr(pointer, "weight")
elif m_name == "kernel":
array = np.transpose(array)
try:
assert pointer.shape == array.shape
except AssertionError as e:
e.args += (pointer.shape, array.shape)
raise
logger.info("Initialize PyTorch weight {}".format(name))
pointer.data = torch.from_numpy(array)
return model
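# Worked example of the name-to-attribute mapping above (added comment; the variable name is illustrative):
# a TF variable "bert/encoder/layer_0/attention/self/query/kernel" is split on "/"; "layer_0" matches
# the [A-Za-z]+_\d+ pattern and resolves to getattr(pointer, "layer")[0]; the trailing "kernel" maps to
# the "weight" attribute, and the numpy array is transposed before being copied into the parameter.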
def mish(x):
return x * torch.tanh(nn.functional.softplus(x))
ACT2FN = {"gelu": gelu, "relu": torch.nn.functional.relu, "swish": swish, "gelu_new": gelu_new, "mish": mish}
BertLayerNorm = torch.nn.LayerNorm
class BertEmbeddings(nn.Module):
"""Construct the embeddings from word, position and token_type embeddings.
"""
def __init__(self, config):
super().__init__()
self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id)
self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size)
self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.hidden_size)
# self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load
# any TensorFlow checkpoint file
self.LayerNorm = BertLayerNorm(config.hidden_size, eps=config.layer_norm_eps)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
def forward(self, input_ids=None, token_type_ids=None, position_ids=None, inputs_embeds=None):
if input_ids is not None:
input_shape = input_ids.size()
else:
input_shape = inputs_embeds.size()[:-1]
seq_length = input_shape[1]
device = input_ids.device if input_ids is not None else inputs_embeds.device
if position_ids is None:
position_ids = torch.arange(seq_length, dtype=torch.long, device=device)
position_ids = position_ids.unsqueeze(0).expand(input_shape)
if token_type_ids is None:
token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device)
if inputs_embeds is None:
inputs_embeds = self.word_embeddings(input_ids)
position_embeddings = self.position_embeddings(position_ids)
token_type_embeddings = self.token_type_embeddings(token_type_ids)
embeddings = inputs_embeds + position_embeddings + token_type_embeddings
embeddings = self.LayerNorm(embeddings)
embeddings = self.dropout(embeddings)
return embeddings
class BertSelfAttention(nn.Module):
def __init__(self, config):
super().__init__()
if config.hidden_size % config.num_attention_heads != 0 and not hasattr(config, "embedding_size"):
raise ValueError(
"The hidden size (%d) is not a multiple of the number of attention "
"heads (%d)" % (config.hidden_size, config.num_attention_heads)
)
self.num_attention_heads = config.num_attention_heads
self.attention_head_size = int(config.hidden_size / config.num_attention_heads)
self.all_head_size = self.num_attention_heads * self.attention_head_size
self.query = nn.Linear(config.hidden_size, self.all_head_size)
self.key = nn.Linear(config.hidden_size, self.all_head_size)
self.value = nn.Linear(config.hidden_size, self.all_head_size)
self.dropout = nn.Dropout(config.attention_probs_dropout_prob)
def transpose_for_scores(self, x):
new_x_shape = x.size()[:-1] + (self.num_attention_heads, self.attention_head_size)
x = x.view(*new_x_shape)
return x.permute(0, 2, 1, 3)
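# Shape walkthrough (added comment; sizes correspond to a bert-base-like config with 12 heads of size 64):
# x: (batch_size, seq_len, 768) -> view to (batch_size, seq_len, 12, 64)
# -> permute to (batch_size, 12, seq_len, 64), one 64-dim slice per attention head.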
def forward(
self,
hidden_states,
attention_mask=None,
head_mask=None,
encoder_hidden_states=None,
encoder_attention_mask=None,
output_attentions=False,
):
mixed_query_layer = self.query(hidden_states)
# If this is instantiated as a cross-attention module, the keys
# and values come from an encoder; the attention mask needs to be
# such that the encoder's padding tokens are not attended to.
if encoder_hidden_states is not None:
mixed_key_layer = self.key(encoder_hidden_states)
mixed_value_layer = self.value(encoder_hidden_states)
attention_mask = encoder_attention_mask
else:
mixed_key_layer = self.key(hidden_states)
mixed_value_layer = self.value(hidden_states)
query_layer = self.transpose_for_scores(mixed_query_layer)
key_layer = self.transpose_for_scores(mixed_key_layer)
value_layer = self.transpose_for_scores(mixed_value_layer)
# Take the dot product between "query" and "key" to get the raw attention scores.
attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2))
attention_scores = attention_scores / math.sqrt(self.attention_head_size)
if attention_mask is not None:
# Apply the attention mask (precomputed for all layers in the BertModel forward() function)
attention_scores = attention_scores + attention_mask
# Normalize the attention scores to probabilities.
attention_probs = nn.Softmax(dim=-1)(attention_scores)
# This is actually dropping out entire tokens to attend to, which might
# seem a bit unusual, but is taken from the original Transformer paper.
attention_probs = self.dropout(attention_probs)
# Mask heads if we want to
if head_mask is not None:
attention_probs = attention_probs * head_mask
context_layer = torch.matmul(attention_probs, value_layer)
context_layer = context_layer.permute(0, 2, 1, 3).contiguous()
new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,)
context_layer = context_layer.view(*new_context_layer_shape)
outputs = (context_layer, attention_probs) if output_attentions else (context_layer,)
return outputs
class BertSelfOutput(nn.Module):
def __init__(self, config):
super().__init__()
self.dense = nn.Linear(config.hidden_size, config.hidden_size)
self.LayerNorm = BertLayerNorm(config.hidden_size, eps=config.layer_norm_eps)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
def forward(self, hidden_states, input_tensor):
hidden_states = self.dense(hidden_states)
hidden_states = self.dropout(hidden_states)
hidden_states = self.LayerNorm(hidden_states + input_tensor)
return hidden_states
class BertAttention(nn.Module):
def __init__(self, config):
super().__init__()
self.self = BertSelfAttention(config)
self.output = BertSelfOutput(config)
self.pruned_heads = set()
def prune_heads(self, heads):
if len(heads) == 0:
return
heads, index = find_pruneable_heads_and_indices(
heads, self.self.num_attention_heads, self.self.attention_head_size, self.pruned_heads
)
# Prune linear layers
self.self.query = prune_linear_layer(self.self.query, index)
self.self.key = prune_linear_layer(self.self.key, index)
self.self.value = prune_linear_layer(self.self.value, index)
self.output.dense = prune_linear_layer(self.output.dense, index, dim=1)
# Update hyper params and store pruned heads
self.self.num_attention_heads = self.self.num_attention_heads - len(heads)
self.self.all_head_size = self.self.attention_head_size * self.self.num_attention_heads
self.pruned_heads = self.pruned_heads.union(heads)
def forward(
self,
hidden_states,
attention_mask=None,
head_mask=None,
encoder_hidden_states=None,
encoder_attention_mask=None,
output_attentions=False,
):
self_outputs = self.self(
hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, output_attentions,
)
attention_output = self.output(self_outputs[0], hidden_states)
outputs = (attention_output,) + self_outputs[1:] # add attentions if we output them
return outputs
class BertIntermediate(nn.Module):
def __init__(self, config):
super().__init__()
self.dense = nn.Linear(config.hidden_size, config.intermediate_size)
if isinstance(config.hidden_act, str):
self.intermediate_act_fn = ACT2FN[config.hidden_act]
else:
self.intermediate_act_fn = config.hidden_act
def forward(self, hidden_states):
hidden_states = self.dense(hidden_states)
hidden_states = self.intermediate_act_fn(hidden_states)
return hidden_states
class BertOutput(nn.Module):
def __init__(self, config):
super().__init__()
self.dense = nn.Linear(config.intermediate_size, config.hidden_size)
self.LayerNorm = BertLayerNorm(config.hidden_size, eps=config.layer_norm_eps)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
def forward(self, hidden_states, input_tensor):
hidden_states = self.dense(hidden_states)
hidden_states = self.dropout(hidden_states)
hidden_states = self.LayerNorm(hidden_states + input_tensor)
return hidden_states
class BertLayer(nn.Module):
def __init__(self, config):
super().__init__()
self.attention = BertAttention(config)
self.is_decoder = config.is_decoder
if self.is_decoder:
self.crossattention = BertAttention(config)
self.intermediate = BertIntermediate(config)
self.output = BertOutput(config)
def forward(
self,
hidden_states,
attention_mask=None,
head_mask=None,
encoder_hidden_states=None,
encoder_attention_mask=None,
output_attentions=False,
):
self_attention_outputs = self.attention(
hidden_states, attention_mask, head_mask, output_attentions=output_attentions,
)
attention_output = self_attention_outputs[0]
outputs = self_attention_outputs[1:] # add self attentions if we output attention weights
if self.is_decoder and encoder_hidden_states is not None:
cross_attention_outputs = self.crossattention(
attention_output,
attention_mask,
head_mask,
encoder_hidden_states,
encoder_attention_mask,
output_attentions,
)
attention_output = cross_attention_outputs[0]
outputs = outputs + cross_attention_outputs[1:] # add cross attentions if we output attention weights
intermediate_output = self.intermediate(attention_output)
layer_output = self.output(intermediate_output, attention_output)
outputs = (layer_output,) + outputs
return outputs
class BertEncoder(nn.Module):
def __init__(self, config):
super().__init__()
self.config = config
self.layer = nn.ModuleList([BertLayer(config) for _ in range(config.num_hidden_layers)])
def forward(
self,
hidden_states,
attention_mask=None,
head_mask=None,
encoder_hidden_states=None,
encoder_attention_mask=None,
output_attentions=False,
output_hidden_states=False,
):
all_hidden_states = ()
all_attentions = ()
for i, layer_module in enumerate(self.layer):
if output_hidden_states:
all_hidden_states = all_hidden_states + (hidden_states,)
if getattr(self.config, "gradient_checkpointing", False):
def create_custom_forward(module):
def custom_forward(*inputs):
return module(*inputs, output_attentions)
return custom_forward
layer_outputs = torch.utils.checkpoint.checkpoint(
create_custom_forward(layer_module),
hidden_states,
attention_mask,
head_mask[i],
encoder_hidden_states,
encoder_attention_mask,
)
else:
layer_outputs = layer_module(
hidden_states,
attention_mask,
head_mask[i],
encoder_hidden_states,
encoder_attention_mask,
output_attentions,
)
hidden_states = layer_outputs[0]
if output_attentions:
all_attentions = all_attentions + (layer_outputs[1],)
# Add last layer
if output_hidden_states:
all_hidden_states = all_hidden_states + (hidden_states,)
outputs = (hidden_states,)
if output_hidden_states:
outputs = outputs + (all_hidden_states,)
if output_attentions:
outputs = outputs + (all_attentions,)
return outputs # last-layer hidden state, (all hidden states), (all attentions)
class BertPooler(nn.Module):
def __init__(self, config):
super().__init__()
self.dense = nn.Linear(config.hidden_size, config.hidden_size)
self.activation = nn.Tanh()
def forward(self, hidden_states):
# We "pool" the model by simply taking the hidden state corresponding
# to the first token.
first_token_tensor = hidden_states[:, 0]
pooled_output = self.dense(first_token_tensor)
pooled_output = self.activation(pooled_output)
return pooled_output
class BertPredictionHeadTransform(nn.Module):
def __init__(self, config):
super().__init__()
self.dense = nn.Linear(config.hidden_size, config.hidden_size)
if isinstance(config.hidden_act, str):
self.transform_act_fn = ACT2FN[config.hidden_act]
else:
self.transform_act_fn = config.hidden_act
self.LayerNorm = BertLayerNorm(config.hidden_size, eps=config.layer_norm_eps)
def forward(self, hidden_states):
hidden_states = self.dense(hidden_states)
hidden_states = self.transform_act_fn(hidden_states)
hidden_states = self.LayerNorm(hidden_states)
return hidden_states
class BertLMPredictionHead(nn.Module):
def __init__(self, config):
super().__init__()
self.transform = BertPredictionHeadTransform(config)
# The output weights are the same as the input embeddings, but there is
# an output-only bias for each token.
self.decoder = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
self.bias = nn.Parameter(torch.zeros(config.vocab_size))
# Need a link between the two variables so that the bias is correctly resized with `resize_token_embeddings`
self.decoder.bias = self.bias
def forward(self, hidden_states):
hidden_states = self.transform(hidden_states)
hidden_states = self.decoder(hidden_states)
return hidden_states
class BertOnlyMLMHead(nn.Module):
def __init__(self, config):
super().__init__()
self.predictions = BertLMPredictionHead(config)
def forward(self, sequence_output):
prediction_scores = self.predictions(sequence_output)
return prediction_scores
class BertOnlyNSPHead(nn.Module):
def __init__(self, config):
super().__init__()
self.seq_relationship = nn.Linear(config.hidden_size, 2)
def forward(self, pooled_output):
seq_relationship_score = self.seq_relationship(pooled_output)
return seq_relationship_score
class BertPreTrainingHeads(nn.Module):
def __init__(self, config):
super().__init__()
self.predictions = BertLMPredictionHead(config)
self.seq_relationship = nn.Linear(config.hidden_size, 2)
def forward(self, sequence_output, pooled_output):
prediction_scores = self.predictions(sequence_output)
seq_relationship_score = self.seq_relationship(pooled_output)
return prediction_scores, seq_relationship_score
class BertPreTrainedModel(PreTrainedModel):
""" An abstract class to handle weights initialization and
a simple interface for downloading and loading pretrained models.
"""
config_class = BertConfig
load_tf_weights = load_tf_weights_in_bert
base_model_prefix = "bert"
def _init_weights(self, module):
""" Initialize the weights """
if isinstance(module, (nn.Linear, nn.Embedding)):
# Slightly different from the TF version which uses truncated_normal for initialization
# cf https://github.com/pytorch/pytorch/pull/5617
module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
elif isinstance(module, BertLayerNorm):
module.bias.data.zero_()
module.weight.data.fill_(1.0)
if isinstance(module, nn.Linear) and module.bias is not None:
module.bias.data.zero_()
BERT_START_DOCSTRING = r"""
This model is a PyTorch `torch.nn.Module` sub-class.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general
usage and behavior.
Parameters:
config (:class:`~transformers.BertConfig`): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the configuration.
Check out the :meth:`~transformers.PreTrainedModel.from_pretrained` method to load the model weights.
"""
BERT_INPUTS_DOCSTRING = r"""
Args:
input_ids (:obj:`torch.LongTensor` of shape :obj:`{0}`):
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using :class:`transformers.BertTokenizer`.
See :func:`transformers.PreTrainedTokenizer.encode` and
:func:`transformers.PreTrainedTokenizer.__call__` for details.
`What are input IDs? <../glossary.html#input-ids>`__
attention_mask (:obj:`torch.FloatTensor` of shape :obj:`{0}`, `optional`, defaults to :obj:`None`):
Mask to avoid performing attention on padding token indices.
Mask values selected in ``[0, 1]``:
``1`` for tokens that are NOT MASKED, ``0`` for MASKED tokens.
`What are attention masks? <../glossary.html#attention-mask>`__
token_type_ids (:obj:`torch.LongTensor` of shape :obj:`{0}`, `optional`, defaults to :obj:`None`):
Segment token indices to indicate first and second portions of the inputs.
Indices are selected in ``[0, 1]``: ``0`` corresponds to a `sentence A` token, ``1``
corresponds to a `sentence B` token
`What are token type IDs? <../glossary.html#token-type-ids>`_
position_ids (:obj:`torch.LongTensor` of shape :obj:`{0}`, `optional`, defaults to :obj:`None`):
Indices of positions of each input sequence tokens in the position embeddings.
Selected in the range ``[0, config.max_position_embeddings - 1]``.
`What are position IDs? <../glossary.html#position-ids>`_
head_mask (:obj:`torch.FloatTensor` of shape :obj:`(num_heads,)` or :obj:`(num_layers, num_heads)`, `optional`, defaults to :obj:`None`):
Mask to nullify selected heads of the self-attention modules.
Mask values selected in ``[0, 1]``:
:obj:`1` indicates the head is **not masked**, :obj:`0` indicates the head is **masked**.
inputs_embeds (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`, defaults to :obj:`None`):
Optionally, instead of passing :obj:`input_ids` you can choose to directly pass an embedded representation.
This is useful if you want more control over how to convert `input_ids` indices into associated vectors
than the model's internal embedding lookup matrix.
encoder_hidden_states (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`, defaults to :obj:`None`):
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention
if the model is configured as a decoder.
encoder_attention_mask (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`, defaults to :obj:`None`):
Mask to avoid performing attention on the padding token indices of the encoder input. This mask
is used in the cross-attention if the model is configured as a decoder.
Mask values selected in ``[0, 1]``:
``1`` for tokens that are NOT MASKED, ``0`` for MASKED tokens.
output_attentions (:obj:`bool`, `optional`, defaults to :obj:`None`):
If set to ``True``, the attentions tensors of all attention layers are returned. See ``attentions`` under returned tensors for more detail.
"""
@add_start_docstrings(
"The bare Bert Model transformer outputting raw hidden-states without any specific head on top.",
BERT_START_DOCSTRING,
)
class BertModel(BertPreTrainedModel):
"""
The model can behave as an encoder (with only self-attention) as well
as a decoder, in which case a layer of cross-attention is added between
the self-attention layers, following the architecture described in `Attention is all you need`_ by Ashish Vaswani,
Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
To behave as a decoder, the model needs to be initialized with the
:obj:`is_decoder` argument of the configuration set to :obj:`True`; an
:obj:`encoder_hidden_states` is expected as an input to the forward pass.
.. _`Attention is all you need`:
https://arxiv.org/abs/1706.03762
"""
def __init__(self, config):
super().__init__(config)
self.config = config
self.embeddings = BertEmbeddings(config)
self.encoder = BertEncoder(config)
self.pooler = BertPooler(config)
self.init_weights()
def get_input_embeddings(self):
return self.embeddings.word_embeddings
def set_input_embeddings(self, value):
self.embeddings.word_embeddings = value
def _prune_heads(self, heads_to_prune):
""" Prunes heads of the model.
heads_to_prune: dict of {layer_num: list of heads to prune in this layer}
See base class PreTrainedModel
"""
for layer, heads in heads_to_prune.items():
self.encoder.layer[layer].attention.prune_heads(heads)
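# Usage sketch (added; the layer and head indices are arbitrary examples). Pruning is normally
# requested through PreTrainedModel.prune_heads, which dispatches to this method:
#
#     model.prune_heads({0: [0, 1], 11: [2]})  # drop heads 0 and 1 of layer 0, and head 2 of layer 11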
@add_start_docstrings_to_callable(BERT_INPUTS_DOCSTRING.format("(batch_size, sequence_length)"))
@add_code_sample_docstrings(tokenizer_class=_TOKENIZER_FOR_DOC, checkpoint="bert-base-uncased")
def forward(
self,
input_ids=None,
attention_mask=None,
token_type_ids=None,
position_ids=None,
head_mask=None,
inputs_embeds=None,
encoder_hidden_states=None,
encoder_attention_mask=None,
output_attentions=None,
output_hidden_states=None,
):
r"""
Return:
:obj:`tuple(torch.FloatTensor)` comprising various elements depending on the configuration (:class:`~transformers.BertConfig`) and inputs:
last_hidden_state (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`):
Sequence of hidden-states at the output of the last layer of the model.
pooler_output (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, hidden_size)`):
Last layer hidden-state of the first token of the sequence (classification token)
further processed by a Linear layer and a Tanh activation function. The Linear
layer weights are trained from the next sentence prediction (classification)
objective during pre-training.
This output is usually *not* a good summary
of the semantic content of the input; you're often better off averaging or pooling
the sequence of hidden-states for the whole input sequence.
hidden_states (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_hidden_states=True`` is passed or when ``config.output_hidden_states=True``):
Tuple of :obj:`torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer)
of shape :obj:`(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``):
Tuple of :obj:`torch.FloatTensor` (one for each layer) of shape
:obj:`(batch_size, num_heads, sequence_length, sequence_length)`.
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
"""
output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
output_hidden_states = (
output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
)
if input_ids is not None and inputs_embeds is not None:
raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
elif input_ids is not None:
input_shape = input_ids.size()
elif inputs_embeds is not None:
input_shape = inputs_embeds.size()[:-1]
else:
raise ValueError("You have to specify either input_ids or inputs_embeds")
device = input_ids.device if input_ids is not None else inputs_embeds.device
if attention_mask is None:
attention_mask = torch.ones(input_shape, device=device)
if token_type_ids is None:
token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device)
# We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
# ourselves in which case we just need to make it broadcastable to all heads.
extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape, device)
# If a 2D or 3D attention mask is provided for the cross-attention
# we need to make it broadcastable to [batch_size, num_heads, seq_length, seq_length]
if self.config.is_decoder and encoder_hidden_states is not None:
encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size()
encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length)
if encoder_attention_mask is None:
encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device)
encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask)
else:
encoder_extended_attention_mask = None
# Prepare head mask if needed
# 1.0 in head_mask indicates we keep the head
# attention_probs has shape bsz x n_heads x N x N
# input head_mask has shape [num_heads] or [num_hidden_layers x num_heads]
# and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length]
head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
embedding_output = self.embeddings(
input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds
)
encoder_outputs = self.encoder(
embedding_output,
attention_mask=extended_attention_mask,
head_mask=head_mask,
encoder_hidden_states=encoder_hidden_states,
encoder_attention_mask=encoder_extended_attention_mask,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
)
sequence_output = encoder_outputs[0]
pooled_output = self.pooler(sequence_output) if self.pooler is not None else None
outputs = (sequence_output, pooled_output,) + encoder_outputs[
1:
] # add hidden_states and attentions if they are here
return outputs # sequence_output, pooled_output, (hidden_states), (attentions)
@add_start_docstrings(
"""Bert Model with two heads on top as done during the pre-training: a `masked language modeling` head and
a `next sentence prediction (classification)` head. """,
BERT_START_DOCSTRING,
)
class BertForPreTraining(BertPreTrainedModel):
def __init__(self, config):
super().__init__(config)
self.bert = BertModel(config)
self.cls = BertPreTrainingHeads(config)
self.init_weights()
def get_output_embeddings(self):
return self.cls.predictions.decoder
@add_start_docstrings_to_callable(BERT_INPUTS_DOCSTRING.format("(batch_size, sequence_length)"))
def forward(
self,
input_ids=None,
attention_mask=None,
token_type_ids=None,
position_ids=None,
head_mask=None,
inputs_embeds=None,
labels=None,
next_sentence_label=None,
output_attentions=None,
output_hidden_states=None,
**kwargs
):
r"""
labels (``torch.LongTensor`` of shape ``(batch_size, sequence_length)``, `optional`, defaults to :obj:`None`):
Labels for computing the masked language modeling loss.
Indices should be in ``[-100, 0, ..., config.vocab_size]`` (see ``input_ids`` docstring)
Tokens with indices set to ``-100`` are ignored (masked), the loss is only computed for the tokens with labels
in ``[0, ..., config.vocab_size]``
next_sentence_label (``torch.LongTensor`` of shape ``(batch_size,)``, `optional`, defaults to :obj:`None`):
Labels for computing the next sequence prediction (classification) loss. Input should be a sequence pair (see :obj:`input_ids` docstring)
Indices should be in ``[0, 1]``.
``0`` indicates sequence B is a continuation of sequence A,
``1`` indicates sequence B is a random sequence.
kwargs (:obj:`Dict[str, any]`, optional, defaults to `{}`):
Used to hide legacy arguments that have been deprecated.
Returns:
:obj:`tuple(torch.FloatTensor)` comprising various elements depending on the configuration (:class:`~transformers.BertConfig`) and inputs:
loss (`optional`, returned when ``labels`` is provided) ``torch.FloatTensor`` of shape ``(1,)``:
Total loss as the sum of the masked language modeling loss and the next sequence prediction (classification) loss.
prediction_scores (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, config.vocab_size)`)
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
seq_relationship_scores (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, 2)`):
Prediction scores of the next sequence prediction (classification) head (scores of True/False
continuation before SoftMax).
hidden_states (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_hidden_states=True`` is passed or when ``config.output_hidden_states=True``):
Tuple of :obj:`torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer)
of shape :obj:`(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``):
Tuple of :obj:`torch.FloatTensor` (one for each layer) of shape
:obj:`(batch_size, num_heads, sequence_length, sequence_length)`.
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Examples::
>>> from transformers import BertTokenizer, BertForPreTraining
>>> import torch
>>> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
>>> model = BertForPreTraining.from_pretrained('bert-base-uncased')
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)
>>> prediction_scores, seq_relationship_scores = outputs[:2]
"""
if "masked_lm_labels" in kwargs:
warnings.warn(
"The `masked_lm_labels` argument is deprecated and will be removed in a future version, use `labels` instead.",
DeprecationWarning,
)
labels = kwargs.pop("masked_lm_labels")
assert kwargs == {}, f"Unexpected keyword arguments: {list(kwargs.keys())}."
outputs = self.bert(
input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
)
sequence_output, pooled_output = outputs[:2]
prediction_scores, seq_relationship_score = self.cls(sequence_output, pooled_output)
outputs = (prediction_scores, seq_relationship_score,) + outputs[
2:
] # add hidden states and attention if they are here
if labels is not None and next_sentence_label is not None:
loss_fct = CrossEntropyLoss()
masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))
next_sentence_loss = loss_fct(seq_relationship_score.view(-1, 2), next_sentence_label.view(-1))
total_loss = masked_lm_loss + next_sentence_loss
outputs = (total_loss,) + outputs
return outputs # (loss), prediction_scores, seq_relationship_score, (hidden_states), (attentions)
@add_start_docstrings(
"""Bert Model with a `language modeling` head on top for CLM fine-tuning. """, BERT_START_DOCSTRING
)
class BertLMHeadModel(BertPreTrainedModel):
def __init__(self, config):
super().__init__(config)
assert config.is_decoder, "If you want to use `BertLMHeadModel` as a standalone, add `is_decoder=True`."
self.bert = BertModel(config)
self.cls = BertOnlyMLMHead(config)
self.init_weights()
def get_output_embeddings(self):
return self.cls.predictions.decoder
@add_start_docstrings_to_callable(BERT_INPUTS_DOCSTRING.format("(batch_size, sequence_length)"))
def forward(
self,
input_ids=None,
attention_mask=None,
token_type_ids=None,
position_ids=None,
head_mask=None,
inputs_embeds=None,
labels=None,
encoder_hidden_states=None,
encoder_attention_mask=None,
output_attentions=None,
output_hidden_states=None,
**kwargs
):
r"""
labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`, defaults to :obj:`None`):
Labels for computing the left-to-right language modeling loss (next word prediction).
Indices should be in ``[-100, 0, ..., config.vocab_size]`` (see ``input_ids`` docstring)
Tokens with indices set to ``-100`` are ignored (masked), the loss is only computed for the tokens with labels
in ``[0, ..., config.vocab_size]``
kwargs (:obj:`Dict[str, any]`, optional, defaults to `{}`):
Used to hide legacy arguments that have been deprecated.
Returns:
:obj:`tuple(torch.FloatTensor)` comprising various elements depending on the configuration (:class:`~transformers.BertConfig`) and inputs:
ltr_lm_loss (:obj:`torch.FloatTensor` of shape :obj:`(1,)`, `optional`, returned when :obj:`labels` is provided):
Next token prediction loss.
prediction_scores (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, config.vocab_size)`)
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_hidden_states=True`` is passed or when ``config.output_hidden_states=True``):
Tuple of :obj:`torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer)
of shape :obj:`(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``):
Tuple of :obj:`torch.FloatTensor` (one for each layer) of shape
:obj:`(batch_size, num_heads, sequence_length, sequence_length)`.
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Example::
>>> from transformers import BertTokenizer, BertLMHeadModel, BertConfig
>>> import torch
>>> tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
>>> config = BertConfig.from_pretrained("bert-base-cased")
>>> config.is_decoder = True
>>> model = BertLMHeadModel.from_pretrained('bert-base-cased', config=config)
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)
>>> last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple
"""
outputs = self.bert(
input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
encoder_hidden_states=encoder_hidden_states,
encoder_attention_mask=encoder_attention_mask,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
)
sequence_output = outputs[0]
prediction_scores = self.cls(sequence_output)
outputs = (prediction_scores,) + outputs[2:] # Add hidden states and attention if they are here
if labels is not None:
# we are doing next-token prediction; shift prediction scores and input ids by one
prediction_scores = prediction_scores[:, :-1, :].contiguous()
labels = labels[:, 1:].contiguous()
loss_fct = CrossEntropyLoss()
ltr_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))
outputs = (ltr_lm_loss,) + outputs
return outputs # (ltr_lm_loss), prediction_scores, (hidden_states), (attentions)
def prepare_inputs_for_generation(self, input_ids, attention_mask=None, **model_kwargs):
input_shape = input_ids.shape
# if model is used as a decoder in encoder-decoder model, the decoder attention mask is created on the fly
if attention_mask is None:
attention_mask = input_ids.new_ones(input_shape)
return {"input_ids": input_ids, "attention_mask": attention_mask}
@add_start_docstrings("""Bert Model with a `language modeling` head on top. """, BERT_START_DOCSTRING)
class BertForMaskedLM(BertPreTrainedModel):
def __init__(self, config):
super().__init__(config)
assert (
not config.is_decoder
), "If you want to use `BertForMaskedLM` make sure `config.is_decoder=False` for bi-directional self-attention."
self.bert = BertModel(config)
self.cls = BertOnlyMLMHead(config)
self.init_weights()
def get_output_embeddings(self):
return self.cls.predictions.decoder
@add_start_docstrings_to_callable(BERT_INPUTS_DOCSTRING.format("(batch_size, sequence_length)"))
@add_code_sample_docstrings(tokenizer_class=_TOKENIZER_FOR_DOC, checkpoint="bert-base-uncased")
def forward(
self,
input_ids=None,
attention_mask=None,
token_type_ids=None,
position_ids=None,
head_mask=None,
inputs_embeds=None,
labels=None,
encoder_hidden_states=None,
encoder_attention_mask=None,
output_attentions=None,
output_hidden_states=None,
**kwargs
):
r"""
labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`, defaults to :obj:`None`):
Labels for computing the masked language modeling loss.
Indices should be in ``[-100, 0, ..., config.vocab_size]`` (see ``input_ids`` docstring)
Tokens with indices set to ``-100`` are ignored (masked), the loss is only computed for the tokens with labels
in ``[0, ..., config.vocab_size]``
kwargs (:obj:`Dict[str, any]`, optional, defaults to `{}`):
Used to hide legacy arguments that have been deprecated.
Returns:
:obj:`tuple(torch.FloatTensor)` comprising various elements depending on the configuration (:class:`~transformers.BertConfig`) and inputs:
masked_lm_loss (`optional`, returned when ``labels`` is provided) ``torch.FloatTensor`` of shape ``(1,)``:
Masked language modeling loss.
prediction_scores (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, config.vocab_size)`)
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_hidden_states=True`` is passed or when ``config.output_hidden_states=True``):
Tuple of :obj:`torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer)
of shape :obj:`(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``):
Tuple of :obj:`torch.FloatTensor` (one for each layer) of shape
:obj:`(batch_size, num_heads, sequence_length, sequence_length)`.
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
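Example (a minimal usage sketch mirroring the examples elsewhere in this file; the checkpoint name and input sentence are purely illustrative)::
>>> from transformers import BertTokenizer, BertForMaskedLM
>>> import torch
>>> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
>>> model = BertForMaskedLM.from_pretrained('bert-base-uncased')
>>> inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
>>> outputs = model(**inputs, labels=inputs["input_ids"])
>>> masked_lm_loss, prediction_scores = outputs[:2]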
"""
if "masked_lm_labels" in kwargs:
warnings.warn(
"The `masked_lm_labels` argument is deprecated and will be removed in a future version, use `labels` instead.",
DeprecationWarning,
)
labels = kwargs.pop("masked_lm_labels")
assert "lm_labels" not in kwargs, "Use `BertWithLMHead` for autoregressive language modeling task."
assert kwargs == {}, f"Unexpected keyword arguments: {list(kwargs.keys())}."
outputs = self.bert(
input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
encoder_hidden_states=encoder_hidden_states,
encoder_attention_mask=encoder_attention_mask,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
)
sequence_output = outputs[0]
prediction_scores = self.cls(sequence_output)
outputs = (prediction_scores,) + outputs[2:] # Add hidden states and attention if they are here
if labels is not None:
loss_fct = CrossEntropyLoss() # -100 index = padding token
masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))
outputs = (masked_lm_loss,) + outputs
return outputs # (masked_lm_loss), prediction_scores, (hidden_states), (attentions)
def prepare_inputs_for_generation(self, input_ids, attention_mask=None, **model_kwargs):
input_shape = input_ids.shape
effective_batch_size = input_shape[0]
# add a dummy token
assert self.config.pad_token_id is not None, "The PAD token should be defined for generation"
attention_mask = torch.cat([attention_mask, attention_mask.new_zeros((attention_mask.shape[0], 1))], dim=-1)
dummy_token = torch.full(
(effective_batch_size, 1), self.config.pad_token_id, dtype=torch.long, device=input_ids.device
)
input_ids = torch.cat([input_ids, dummy_token], dim=1)
return {"input_ids": input_ids, "attention_mask": attention_mask}
@add_start_docstrings(
"""Bert Model with a `next sentence prediction (classification)` head on top. """, BERT_START_DOCSTRING,
)
class BertForNextSentencePrediction(BertPreTrainedModel):
def __init__(self, config):
super().__init__(config)
self.bert = BertModel(config)
self.cls = BertOnlyNSPHead(config)
self.init_weights()
@add_start_docstrings_to_callable(BERT_INPUTS_DOCSTRING.format("(batch_size, sequence_length)"))
def forward(
self,
input_ids=None,
attention_mask=None,
token_type_ids=None,
position_ids=None,
head_mask=None,
inputs_embeds=None,
next_sentence_label=None,
output_attentions=None,
output_hidden_states=None,
):
r"""
next_sentence_label (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`, defaults to :obj:`None`):
Labels for computing the next sequence prediction (classification) loss. Input should be a sequence pair (see ``input_ids`` docstring)
Indices should be in ``[0, 1]``.
``0`` indicates sequence B is a continuation of sequence A,
``1`` indicates sequence B is a random sequence.
Returns:
:obj:`tuple(torch.FloatTensor)` comprising various elements depending on the configuration (:class:`~transformers.BertConfig`) and inputs:
loss (:obj:`torch.FloatTensor` of shape :obj:`(1,)`, `optional`, returned when :obj:`next_sentence_label` is provided):
Next sequence prediction (classification) loss.
seq_relationship_scores (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, 2)`):
Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation before SoftMax).
hidden_states (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_hidden_states=True`` is passed or when ``config.output_hidden_states=True``):
Tuple of :obj:`torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer)
of shape :obj:`(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``):
Tuple of :obj:`torch.FloatTensor` (one for each layer) of shape
:obj:`(batch_size, num_heads, sequence_length, sequence_length)`.
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Examples::
>>> from transformers import BertTokenizer, BertForNextSentencePrediction
>>> import torch
>>> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
>>> model = BertForNextSentencePrediction.from_pretrained('bert-base-uncased')
>>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
>>> next_sentence = "The sky is blue due to the shorter wavelength of blue light."
>>> encoding = tokenizer(prompt, next_sentence, return_tensors='pt')
>>> loss, logits = model(**encoding, next_sentence_label=torch.LongTensor([1]))
>>> assert logits[0, 0] < logits[0, 1] # next sentence was random
"""
outputs = self.bert(
input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
)
pooled_output = outputs[1]
seq_relationship_score = self.cls(pooled_output)
outputs = (seq_relationship_score,) + outputs[2:] # add hidden states and attention if they are here
if next_sentence_label is not None:
loss_fct = CrossEntropyLoss()
next_sentence_loss = loss_fct(seq_relationship_score.view(-1, 2), next_sentence_label.view(-1))
outputs = (next_sentence_loss,) + outputs
return outputs # (next_sentence_loss), seq_relationship_score, (hidden_states), (attentions)
@add_start_docstrings(
"""Bert Model transformer with a sequence classification/regression head on top (a linear layer on top of
the pooled output) e.g. for GLUE tasks. """,
BERT_START_DOCSTRING,
)
class BertForSequenceClassification(BertPreTrainedModel):
def __init__(self, config):
super().__init__(config)
self.num_labels = config.num_labels
self.bert = BertModel(config)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
self.classifier = nn.Linear(config.hidden_size, config.num_labels)
self.init_weights()
@add_start_docstrings_to_callable(BERT_INPUTS_DOCSTRING.format("(batch_size, sequence_length)"))
@add_code_sample_docstrings(tokenizer_class=_TOKENIZER_FOR_DOC, checkpoint="bert-base-uncased")
def forward(
self,
input_ids=None,
attention_mask=None,
token_type_ids=None,
position_ids=None,
head_mask=None,
inputs_embeds=None,
labels=None,
output_attentions=None,
output_hidden_states=None,
):
r"""
labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`, defaults to :obj:`None`):
Labels for computing the sequence classification/regression loss.
Indices should be in :obj:`[0, ..., config.num_labels - 1]`.
If :obj:`config.num_labels == 1` a regression loss is computed (Mean-Square loss),
If :obj:`config.num_labels > 1` a classification loss is computed (Cross-Entropy).
Returns:
:obj:`tuple(torch.FloatTensor)` comprising various elements depending on the configuration (:class:`~transformers.BertConfig`) and inputs:
loss (:obj:`torch.FloatTensor` of shape :obj:`(1,)`, `optional`, returned when :obj:`label` is provided):
Classification (or regression if config.num_labels==1) loss.
logits (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, config.num_labels)`):
Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_hidden_states=True`` is passed or when ``config.output_hidden_states=True``):
Tuple of :obj:`torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer)
of shape :obj:`(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``):
Tuple of :obj:`torch.FloatTensor` (one for each layer) of shape
:obj:`(batch_size, num_heads, sequence_length, sequence_length)`.
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
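Example (a minimal usage sketch; the checkpoint name, input text and label are illustrative only)::
>>> from transformers import BertTokenizer, BertForSequenceClassification
>>> import torch
>>> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
>>> model = BertForSequenceClassification.from_pretrained('bert-base-uncased')
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> labels = torch.tensor([1]).unsqueeze(0)  # batch size 1
>>> outputs = model(**inputs, labels=labels)
>>> loss, logits = outputs[:2]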
"""
outputs = self.bert(
input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
)
pooled_output = outputs[1]
pooled_output = self.dropout(pooled_output)
logits = self.classifier(pooled_output)
outputs = (logits,) + outputs[2:] # add hidden states and attention if they are here
if labels is not None:
if self.num_labels == 1:
# We are doing regression
loss_fct = MSELoss()
loss = loss_fct(logits.view(-1), labels.view(-1))
else:
loss_fct = CrossEntropyLoss()
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
outputs = (loss,) + outputs
return outputs # (loss), logits, (hidden_states), (attentions)
@add_start_docstrings(
"""Bert Model with a multiple choice classification head on top (a linear layer on top of
the pooled output and a softmax) e.g. for RocStories/SWAG tasks. """,
BERT_START_DOCSTRING,
)
class BertForMultipleChoice(BertPreTrainedModel):
def __init__(self, config):
super().__init__(config)
self.bert = BertModel(config)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
self.classifier = nn.Linear(config.hidden_size, 1)
self.init_weights()
@add_start_docstrings_to_callable(BERT_INPUTS_DOCSTRING.format("(batch_size, num_choices, sequence_length)"))
@add_code_sample_docstrings(tokenizer_class=_TOKENIZER_FOR_DOC, checkpoint="bert-base-uncased")
def forward(
self,
input_ids=None,
attention_mask=None,
token_type_ids=None,
position_ids=None,
head_mask=None,
inputs_embeds=None,
labels=None,
output_attentions=None,
output_hidden_states=None,
):
r"""
labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`, defaults to :obj:`None`):
Labels for computing the multiple choice classification loss.
Indices should be in ``[0, ..., num_choices-1]`` where `num_choices` is the size of the second dimension
of the input tensors. (see `input_ids` above)
Returns:
:obj:`tuple(torch.FloatTensor)` comprising various elements depending on the configuration (:class:`~transformers.BertConfig`) and inputs:
loss (:obj:`torch.FloatTensor` of shape `(1,)`, `optional`, returned when :obj:`labels` is provided):
Classification loss.
classification_scores (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, num_choices)`):
`num_choices` is the second dimension of the input tensors. (see `input_ids` above).
Classification scores (before SoftMax).
hidden_states (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_hidden_states=True`` is passed or when ``config.output_hidden_states=True``):
Tuple of :obj:`torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer)
of shape :obj:`(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``):
Tuple of :obj:`torch.FloatTensor` (one for each layer) of shape
:obj:`(batch_size, num_heads, sequence_length, sequence_length)`.
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
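Example (a minimal usage sketch; the checkpoint name and the choice sentences are illustrative)::
>>> from transformers import BertTokenizer, BertForMultipleChoice
>>> import torch
>>> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
>>> model = BertForMultipleChoice.from_pretrained('bert-base-uncased')
>>> prompt = "In Italy, pizza served in formal settings is presented unsliced."
>>> choice0 = "It is eaten with a fork and a knife."
>>> choice1 = "It is eaten while held in the hand."
>>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors='pt', padding=True)
>>> outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=torch.tensor(0).unsqueeze(0))
>>> loss, classification_scores = outputs[:2]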
"""
num_choices = input_ids.shape[1] if input_ids is not None else inputs_embeds.shape[1]
input_ids = input_ids.view(-1, input_ids.size(-1)) if input_ids is not None else None
attention_mask = attention_mask.view(-1, attention_mask.size(-1)) if attention_mask is not None else None
token_type_ids = token_type_ids.view(-1, token_type_ids.size(-1)) if token_type_ids is not None else None
position_ids = position_ids.view(-1, position_ids.size(-1)) if position_ids is not None else None
inputs_embeds = (
inputs_embeds.view(-1, inputs_embeds.size(-2), inputs_embeds.size(-1))
if inputs_embeds is not None
else None
)
outputs = self.bert(
input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
)
pooled_output = outputs[1]
pooled_output = self.dropout(pooled_output)
logits = self.classifier(pooled_output)
reshaped_logits = logits.view(-1, num_choices)
outputs = (reshaped_logits,) + outputs[2:] # add hidden states and attention if they are here
if labels is not None:
loss_fct = CrossEntropyLoss()
loss = loss_fct(reshaped_logits, labels)
outputs = (loss,) + outputs
return outputs # (loss), reshaped_logits, (hidden_states), (attentions)
@add_start_docstrings(
"""Bert Model with a token classification head on top (a linear layer on top of
the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. """,
BERT_START_DOCSTRING,
)
class BertForTokenClassification(BertPreTrainedModel):
def __init__(self, config):
super().__init__(config)
self.num_labels = config.num_labels
self.bert = BertModel(config)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
self.classifier = nn.Linear(config.hidden_size, config.num_labels)
self.init_weights()
@add_start_docstrings_to_callable(BERT_INPUTS_DOCSTRING.format("(batch_size, sequence_length)"))
@add_code_sample_docstrings(tokenizer_class=_TOKENIZER_FOR_DOC, checkpoint="bert-base-uncased")
def forward(
self,
input_ids=None,
attention_mask=None,
token_type_ids=None,
position_ids=None,
head_mask=None,
inputs_embeds=None,
labels=None,
output_attentions=None,
output_hidden_states=None,
):
r"""
labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`, defaults to :obj:`None`):
Labels for computing the token classification loss.
Indices should be in ``[0, ..., config.num_labels - 1]``.
Returns:
:obj:`tuple(torch.FloatTensor)` comprising various elements depending on the configuration (:class:`~transformers.BertConfig`) and inputs:
loss (:obj:`torch.FloatTensor` of shape :obj:`(1,)`, `optional`, returned when ``labels`` is provided) :
Classification loss.
scores (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, config.num_labels)`)
Classification scores (before SoftMax).
hidden_states (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_hidden_states=True`` is passed or when ``config.output_hidden_states=True``):
Tuple of :obj:`torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer)
of shape :obj:`(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``):
Tuple of :obj:`torch.FloatTensor` (one for each layer) of shape
:obj:`(batch_size, num_heads, sequence_length, sequence_length)`.
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
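Example (a minimal usage sketch; the checkpoint name, input text and dummy per-token labels are illustrative)::
>>> from transformers import BertTokenizer, BertForTokenClassification
>>> import torch
>>> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
>>> model = BertForTokenClassification.from_pretrained('bert-base-uncased')
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> labels = torch.tensor([1] * inputs["input_ids"].size(1)).unsqueeze(0)  # batch size 1
>>> outputs = model(**inputs, labels=labels)
>>> loss, scores = outputs[:2]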
"""
outputs = self.bert(
input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
)
sequence_output = outputs[0]
sequence_output = self.dropout(sequence_output)
logits = self.classifier(sequence_output)
outputs = (logits,) + outputs[2:] # add hidden states and attention if they are here
if labels is not None:
loss_fct = CrossEntropyLoss()
# Only keep active parts of the loss
if attention_mask is not None:
active_loss = attention_mask.view(-1) == 1
active_logits = logits.view(-1, self.num_labels)
active_labels = torch.where(
active_loss, labels.view(-1), torch.tensor(loss_fct.ignore_index).type_as(labels)
)
loss = loss_fct(active_logits, active_labels)
else:
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
outputs = (loss,) + outputs
return outputs # (loss), scores, (hidden_states), (attentions)
@add_start_docstrings(
"""Bert Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layers on top of the hidden-states output to compute `span start logits` and `span end logits`). """,
BERT_START_DOCSTRING,
)
class BertForQuestionAnswering(BertPreTrainedModel):
def __init__(self, config):
super().__init__(config)
self.num_labels = config.num_labels
self.bert = BertModel(config)
self.qa_outputs = nn.Linear(config.hidden_size, config.num_labels)
self.init_weights()
@add_start_docstrings_to_callable(BERT_INPUTS_DOCSTRING.format("(batch_size, sequence_length)"))
@add_code_sample_docstrings(tokenizer_class=_TOKENIZER_FOR_DOC, checkpoint="bert-base-uncased")
def forward(
self,
input_ids=None,
attention_mask=None,
token_type_ids=None,
position_ids=None,
head_mask=None,
inputs_embeds=None,
start_positions=None,
end_positions=None,
output_attentions=None,
output_hidden_states=None,
):
r"""
start_positions (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`, defaults to :obj:`None`):
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (`sequence_length`).
Positions outside of the sequence are not taken into account for computing the loss.
end_positions (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`, defaults to :obj:`None`):
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (`sequence_length`).
Positions outside of the sequence are not taken into account for computing the loss.
Returns:
:obj:`tuple(torch.FloatTensor)` comprising various elements depending on the configuration (:class:`~transformers.BertConfig`) and inputs:
loss (:obj:`torch.FloatTensor` of shape :obj:`(1,)`, `optional`, returned when :obj:`labels` is provided):
Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_scores (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length,)`):
Span-start scores (before SoftMax).
end_scores (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length,)`):
Span-end scores (before SoftMax).
hidden_states (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_hidden_states=True`` is passed or when ``config.output_hidden_states=True``):
Tuple of :obj:`torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer)
of shape :obj:`(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``):
Tuple of :obj:`torch.FloatTensor` (one for each layer) of shape
:obj:`(batch_size, num_heads, sequence_length, sequence_length)`.
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
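Example (a minimal usage sketch; the checkpoint name, question/context pair and span positions are illustrative)::
>>> from transformers import BertTokenizer, BertForQuestionAnswering
>>> import torch
>>> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
>>> model = BertForQuestionAnswering.from_pretrained('bert-base-uncased')
>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
>>> inputs = tokenizer(question, text, return_tensors='pt')
>>> outputs = model(**inputs, start_positions=torch.tensor([1]), end_positions=torch.tensor([3]))
>>> loss, start_scores, end_scores = outputs[:3]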
"""
outputs = self.bert(
input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
)
sequence_output = outputs[0]
logits = self.qa_outputs(sequence_output)
start_logits, end_logits = logits.split(1, dim=-1)
start_logits = start_logits.squeeze(-1)
end_logits = end_logits.squeeze(-1)
outputs = (start_logits, end_logits,) + outputs[2:]
if start_positions is not None and end_positions is not None:
# If we are on multi-GPU, split add a dimension
if len(start_positions.size()) > 1:
start_positions = start_positions.squeeze(-1)
if len(end_positions.size()) > 1:
end_positions = end_positions.squeeze(-1)
# sometimes the start/end positions are outside our model inputs, we ignore these terms
ignored_index = start_logits.size(1)
start_positions.clamp_(0, ignored_index)
end_positions.clamp_(0, ignored_index)
loss_fct = CrossEntropyLoss(ignore_index=ignored_index)
start_loss = loss_fct(start_logits, start_positions)
end_loss = loss_fct(end_logits, end_positions)
total_loss = (start_loss + end_loss) / 2
outputs = (total_loss,) + outputs
return outputs # (loss), start_logits, end_logits, (hidden_states), (attentions)
================================================
FILE: bert/modeling_utils.py
================================================
# coding=utf-8
# Copyright 2018 The Google AI Language Team Authors, Facebook AI Research authors and The HuggingFace Inc. team.
# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import inspect
import logging
import os
from typing import Callable, Dict, List, Optional, Tuple
import torch
from torch import Tensor, device, dtype, nn
from torch.nn import CrossEntropyLoss
from torch.nn import functional as F
from .activations import get_activation
from .configuration_utils import PretrainedConfig
from .file_utils import (
DUMMY_INPUTS,
TF2_WEIGHTS_NAME,
TF_WEIGHTS_NAME,
WEIGHTS_NAME,
cached_path,
hf_bucket_url,
is_remote_url,
)
from .generation_utils import GenerationMixin
logger = logging.getLogger(__name__)
try:
from torch.nn import Identity
except ImportError:
# Older PyTorch compatibility
class Identity(nn.Module):
r"""A placeholder identity operator that is argument-insensitive.
"""
def __init__(self, *args, **kwargs):
super().__init__()
def forward(self, input):
return input
def find_pruneable_heads_and_indices(
heads: List, n_heads: int, head_size: int, already_pruned_heads: set
) -> Tuple[set, "torch.LongTensor"]:
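"""Compute which heads to prune (excluding heads that were already pruned) and the flat indices
of the dimensions to keep in the corresponding attention weight matrices."""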
mask = torch.ones(n_heads, head_size)
heads = set(heads) - already_pruned_heads # Convert to set and remove already pruned heads
for head in heads:
# Compute how many pruned heads are before the head and move the index accordingly
head = head - sum(1 if h < head else 0 for h in already_pruned_heads)
mask[head] = 0
mask = mask.view(-1).contiguous().eq(1)
index: torch.LongTensor = torch.arange(len(mask))[mask].long()
return heads, index
class ModuleUtilsMixin:
"""
A few utilities for torch.nn.Modules, to be used as a mixin.
"""
def num_parameters(self, only_trainable: bool = False) -> int:
"""
Get number of (optionally, trainable) parameters in the module.
"""
params = filter(lambda x: x.requires_grad, self.parameters()) if only_trainable else self.parameters()
return sum(p.numel() for p in params)
@staticmethod
def _hook_rss_memory_pre_forward(module, *args, **kwargs):
try:
import psutil
except (ImportError):
raise ImportError("You need to install psutil (pip install psutil) to use memory tracing.")
process = psutil.Process(os.getpid())
mem = process.memory_info()
module.mem_rss_pre_forward = mem.rss
return None
@staticmethod
def _hook_rss_memory_post_forward(module, *args, **kwargs):
try:
import psutil
except (ImportError):
raise ImportError("You need to install psutil (pip install psutil) to use memory tracing.")
process = psutil.Process(os.getpid())
mem = process.memory_info()
module.mem_rss_post_forward = mem.rss
mem_rss_diff = module.mem_rss_post_forward - module.mem_rss_pre_forward
module.mem_rss_diff = mem_rss_diff + (module.mem_rss_diff if hasattr(module, "mem_rss_diff") else 0)
return None
def add_memory_hooks(self):
""" Add a memory hook before and after each sub-module forward pass to record increase in memory consumption.
Increase in memory consumption is stored in a `mem_rss_diff` attribute for each module and can be reset to zero with `model.reset_memory_hooks_state()`
"""
for module in self.modules():
module.register_forward_pre_hook(self._hook_rss_memory_pre_forward)
module.register_forward_hook(self._hook_rss_memory_post_forward)
self.reset_memory_hooks_state()
def reset_memory_hooks_state(self):
for module in self.modules():
module.mem_rss_diff = 0
module.mem_rss_post_forward = 0
module.mem_rss_pre_forward = 0
@property
def device(self) -> device:
"""
Get torch.device from module, assuming that the whole module has one device.
"""
try:
return next(self.parameters()).device
except StopIteration:
# For nn.DataParallel compatibility in PyTorch 1.5
def find_tensor_attributes(module: nn.Module) -> List[Tuple[str, Tensor]]:
tuples = [(k, v) for k, v in module.__dict__.items() if torch.is_tensor(v)]
return tuples
gen = self._named_members(get_members_fn=find_tensor_attributes)
first_tuple = next(gen)
return first_tuple[1].device
@property
def dtype(self) -> dtype:
"""
Get torch.dtype from module, assuming that the whole module has one dtype.
"""
try:
return next(self.parameters()).dtype
except StopIteration:
# For nn.DataParallel compatibility in PyTorch 1.5
def find_tensor_attributes(module: nn.Module) -> List[Tuple[str, Tensor]]:
tuples = [(k, v) for k, v in module.__dict__.items() if torch.is_tensor(v)]
return tuples
gen = self._named_members(get_members_fn=find_tensor_attributes)
first_tuple = next(gen)
return first_tuple[1].dtype
def invert_attention_mask(self, encoder_attention_mask: Tensor) -> Tensor:
"""type: torch.Tensor -> torch.Tensor"""
if encoder_attention_mask.dim() == 3:
encoder_extended_attention_mask = encoder_attention_mask[:, None, :, :]
if encoder_attention_mask.dim() == 2:
encoder_extended_attention_mask = encoder_attention_mask[:, None, None, :]
# T5 has a mask that can compare sequence ids, we can simulate this here with this transposition
# Cf. https://github.com/tensorflow/mesh/blob/8d2465e9bc93129b913b5ccc6a59aa97abd96ec6/mesh_tensorflow
# /transformer/transformer_layers.py#L270
# encoder_extended_attention_mask = (encoder_extended_attention_mask ==
# encoder_extended_attention_mask.transpose(-1, -2))
encoder_extended_attention_mask = encoder_extended_attention_mask.to(dtype=self.dtype) # fp16 compatibility
if self.dtype == torch.float16:
encoder_extended_attention_mask = (1.0 - encoder_extended_attention_mask) * -1e4
elif self.dtype == torch.float32:
encoder_extended_attention_mask = (1.0 - encoder_extended_attention_mask) * -1e9
else:
raise ValueError(
"{} not recognized. `dtype` should be set to either `torch.float32` or `torch.float16`".format(
self.dtype
)
)
return encoder_extended_attention_mask
def get_extended_attention_mask(self, attention_mask: Tensor, input_shape: Tuple, device: device) -> Tensor:
"""Makes broadcastable attention mask and causal mask so that future and maked tokens are ignored.
Arguments:
attention_mask: torch.Tensor with 1 indicating tokens to ATTEND to
input_shape: tuple, shape of input_ids
device: torch.Device, usually self.device
Returns:
torch.Tensor of dtype ``self.dtype`` (an additive mask, broadcastable to all attention heads)
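Example::
# For example purposes. Not runnable as-is; `model` stands for any module using this mixin.
mask = torch.tensor([[1, 1, 0]])
extended = model.get_extended_attention_mask(mask, mask.shape, mask.device)
# encoder (non-decoder) case: `extended` has shape [1, 1, 1, 3] with values [0., 0., -10000.]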
"""
# We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
# ourselves in which case we just need to make it broadcastable to all heads.
if attention_mask.dim() == 3:
extended_attention_mask = attention_mask[:, None, :, :]
elif attention_mask.dim() == 2:
# Provided a padding mask of dimensions [batch_size, seq_length]
# - if the model is a decoder, apply a causal mask in addition to the padding mask
# - if the model is an encoder, make the mask broadcastable to [batch_size, num_heads, seq_length, seq_length]
if self.config.is_decoder:
batch_size, seq_length = input_shape
seq_ids = torch.arange(seq_length, device=device)
causal_mask = seq_ids[None, None, :].repeat(batch_size, seq_length, 1) <= seq_ids[None, :, None]
# causal and attention masks must have same type with pytorch version < 1.3
causal_mask = causal_mask.to(attention_mask.dtype)
extended_attention_mask = causal_mask[:, None, :, :] * attention_mask[:, None, None, :]
else:
extended_attention_mask = attention_mask[:, None, None, :]
else:
raise ValueError(
"Wrong shape for input_ids (shape {}) or attention_mask (shape {})".format(
input_shape, attention_mask.shape
)
)
# Since attention_mask is 1.0 for positions we want to attend and 0.0 for
# masked positions, this operation will create a tensor which is 0.0 for
# positions we want to attend and -10000.0 for masked positions.
# Since we are adding it to the raw scores before the softmax, this is
# effectively the same as removing these entirely.
extended_attention_mask = extended_attention_mask.to(dtype=self.dtype) # fp16 compatibility
extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0
return extended_attention_mask
def get_head_mask(self, head_mask: Tensor, num_hidden_layers: int, is_attention_chunked: bool = False) -> Tensor:
"""
Prepare the head mask if needed.
1.0 in head_mask indicates we keep the head.
attention_probs has shape bsz x n_heads x N x N
Arguments:
head_mask: torch.Tensor or None: has shape [num_heads] or [num_hidden_layers x num_heads]
num_hidden_layers: int
Returns:
Tensor of shape [num_hidden_layers x batch x num_heads x seq_length x seq_length]
or list with [None] for each layer
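Example::
# For example purposes. Not runnable as-is; `model` stands for any module using this mixin.
head_mask = torch.ones(model.config.num_attention_heads)  # keep every head
head_mask = model.get_head_mask(head_mask, model.config.num_hidden_layers)
# head_mask is now 5-D and broadcastable across layers, batch and sequence positions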
"""
if head_mask is not None:
head_mask = self._convert_head_mask_to_5d(head_mask, num_hidden_layers)
if is_attention_chunked is True:
head_mask = head_mask.unsqueeze(-1)
else:
head_mask = [None] * num_hidden_layers
return head_mask
def _convert_head_mask_to_5d(self, head_mask, num_hidden_layers):
"""-> [num_hidden_layers x batch x num_heads x seq_length x seq_length]"""
if head_mask.dim() == 1:
head_mask = head_mask.unsqueeze(0).unsqueeze(0).unsqueeze(-1).unsqueeze(-1)
head_mask = head_mask.expand(num_hidden_layers, -1, -1, -1, -1)
elif head_mask.dim() == 2:
head_mask = head_mask.unsqueeze(1).unsqueeze(-1).unsqueeze(-1) # We can specify head_mask for each layer
assert head_mask.dim() == 5, f"head_mask.dim != 5, instead {head_mask.dim()}"
head_mask = head_mask.to(dtype=self.dtype)  # switch to float if needed + fp16 compatibility
return head_mask
class PreTrainedModel(nn.Module, ModuleUtilsMixin, GenerationMixin):
r""" Base class for all models.
:class:`~transformers.PreTrainedModel` takes care of storing the configuration of the models and handles methods for loading/downloading/saving models
as well as a few methods common to all models to (i) resize the input embeddings and (ii) prune heads in the self-attention layers.
Class attributes (overridden by derived classes):
- ``config_class``: a class derived from :class:`~transformers.PretrainedConfig` to use as configuration class for this model architecture.
- ``load_tf_weights``: a python ``method`` for loading a TensorFlow checkpoint in a PyTorch model, taking as arguments:
- ``model``: an instance of the relevant subclass of :class:`~transformers.PreTrainedModel`,
- ``config``: an instance of the relevant subclass of :class:`~transformers.PretrainedConfig`,
- ``path``: a path (string) to the TensorFlow checkpoint.
- ``base_model_prefix``: a string indicating the attribute associated to the base model in derived classes of the same architecture adding modules on top of the base model.
"""
config_class = None
base_model_prefix = ""
@property
def dummy_inputs(self):
""" Dummy inputs to do a forward pass in the network.
Returns:
torch.Tensor with dummy inputs
"""
return {"input_ids": torch.tensor(DUMMY_INPUTS)}
def __init__(self, config, *inputs, **kwargs):
super().__init__()
if not isinstance(config, PretrainedConfig):
raise ValueError(
"Parameter config in `{}(config)` should be an instance of class `PretrainedConfig`. "
"To create a model from a pretrained model use "
"`model = {}.from_pretrained(PRETRAINED_MODEL_NAME)`".format(
self.__class__.__name__, self.__class__.__name__
)
)
# Save config in model
self.config = config
@property
def base_model(self):
return getattr(self, self.base_model_prefix, self)
def get_input_embeddings(self):
"""
Returns the model's input embeddings.
Returns:
:obj:`nn.Module`:
A torch module mapping vocabulary to hidden states.
"""
base_model = getattr(self, self.base_model_prefix, self)
if base_model is not self:
return base_model.get_input_embeddings()
else:
raise NotImplementedError
def set_input_embeddings(self, value: nn.Module):
"""
Set model's input embeddings
Args:
value (:obj:`nn.Module`):
A module mapping vocabulary to hidden states.
"""
base_model = getattr(self, self.base_model_prefix, self)
if base_model is not self:
base_model.set_input_embeddings(value)
else:
raise NotImplementedError
def get_output_embeddings(self):
"""
Returns the model's output embeddings.
Returns:
:obj:`nn.Module`:
A torch module mapping hidden states to vocabulary.
"""
return None # Overwrite for models with output embeddings
def tie_weights(self):
"""
Tie the weights between the input embeddings and the output embeddings.
If the `torchscript` flag is set in the configuration, can't handle parameter sharing so we are cloning
the weights instead.
"""
output_embeddings = self.get_output_embeddings()
if output_embeddings is not None:
self._tie_or_clone_weights(output_embeddings, self.get_input_embeddings())
def _tie_or_clone_weights(self, output_embeddings, input_embeddings):
""" Tie or clone module weights depending of whether we are using TorchScript or not
"""
if self.config.torchscript:
output_embeddings.weight = nn.Parameter(input_embeddings.weight.clone())
else:
output_embeddings.weight = input_embeddings.weight
if getattr(output_embeddings, "bias", None) is not None:
output_embeddings.bias.data = torch.nn.functional.pad(
output_embeddings.bias.data,
(0, output_embeddings.weight.shape[0] - output_embeddings.bias.shape[0],),
"constant",
0,
)
if hasattr(output_embeddings, "out_features") and hasattr(input_embeddings, "num_embeddings"):
output_embeddings.out_features = input_embeddings.num_embeddings
def resize_token_embeddings(self, new_num_tokens: Optional[int] = None):
""" Resize input token embeddings matrix of the model if new_num_tokens != config.vocab_size.
Takes care of tying the embedding weights afterwards if the model class has a `tie_weights()` method.
Arguments:
new_num_tokens: (`optional`) int:
New number of tokens in the embedding matrix. Increasing the size will add newly initialized vectors at the end. Reducing the size will remove vectors from the end.
If not provided or None: does nothing and just returns a pointer to the input tokens ``torch.nn.Embedding`` Module of the model.
Return: ``torch.nn.Embedding``
Pointer to the input tokens Embedding Module of the model
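Example::
# For example purposes. Not runnable as-is; assumes a matching `tokenizer` whose vocabulary was extended.
tokenizer.add_tokens(['[NEW_TOKEN]'])
model.resize_token_embeddings(len(tokenizer))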
"""
base_model = getattr(self, self.base_model_prefix, self) # get the base model if needed
model_embeds = base_model._resize_token_embeddings(new_num_tokens)
if new_num_tokens is None:
return model_embeds
# Update base model and current model config
self.config.vocab_size = new_num_tokens
base_model.vocab_size = new_num_tokens
# Tie weights again if needed
self.tie_weights()
return model_embeds
def _resize_token_embeddings(self, new_num_tokens):
old_embeddings = self.get_input_embeddings()
new_embeddings = self._get_resized_embeddings(old_embeddings, new_num_tokens)
self.set_input_embeddings(new_embeddings)
return self.get_input_embeddings()
def _get_resized_embeddings(
self, old_embeddings: torch.nn.Embedding, new_num_tokens: Optional[int] = None
) -> torch.nn.Embedding:
""" Build a resized Embedding Module from a provided token Embedding Module.
Increasing the size will add newly initialized vectors at the end
Reducing the size will remove vectors from the end
Args:
old_embeddings: ``torch.nn.Embedding``
Old embeddings to be resized.
new_num_tokens: (`optional`) int
New number of tokens in the embedding matrix.
Increasing the size will add newly initialized vectors at the end
Reducing the size will remove vectors from the end
If not provided or None: return the provided token Embedding Module.
Return: ``torch.nn.Embedding``
Pointer to the resized Embedding Module or the old Embedding Module if new_num_tokens is None
"""
if new_num_tokens is None:
return old_embeddings
old_num_tokens, old_embedding_dim = old_embeddings.weight.size()
if old_num_tokens == new_num_tokens:
return old_embeddings
# Build new embeddings
new_embeddings = nn.Embedding(new_num_tokens, old_embedding_dim)
new_embeddings.to(old_embeddings.weight.device)
# initialize all new embeddings (in particular added tokens)
self._init_weights(new_embeddings)
# Copy token embeddings from the previous weights
num_tokens_to_copy = min(old_num_tokens, new_num_tokens)
new_embeddings.weight.data[:num_tokens_to_copy, :] = old_embeddings.weight.data[:num_tokens_to_copy, :]
return new_embeddings
def init_weights(self):
""" Initialize and prunes weights if needed. """
# Initialize weights
self.apply(self._init_weights)
# Prune heads if needed
if self.config.pruned_heads:
self.prune_heads(self.config.pruned_heads)
# Tie weights if needed
self.tie_weights()
def prune_heads(self, heads_to_prune: Dict):
""" Prunes heads of the base model.
Arguments:
heads_to_prune: dict with keys being selected layer indices (`int`) and associated values being the list of heads to prune in said layer (list of `int`).
E.g. {1: [0, 2], 2: [2, 3]} will prune heads 0 and 2 on layer 1 and heads 2 and 3 on layer 2.
"""
# save new sets of pruned heads as union of previously stored pruned heads and newly pruned heads
for layer, heads in heads_to_prune.items():
union_heads = set(self.config.pruned_heads.get(layer, [])) | set(heads)
self.config.pruned_heads[layer] = list(union_heads) # Unfortunately we have to store it as list for JSON
self.base_model._prune_heads(heads_to_prune)
def save_pretrained(self, save_directory):
""" Save a model and its configuration file to a directory, so that it
can be re-loaded using the :func:`~transformers.PreTrainedModel.from_pretrained` class method.
Arguments:
save_directory: directory to which to save.
"""
if os.path.isfile(save_directory):
logger.error("Provided path ({}) should be a directory, not a file".format(save_directory))
return
os.makedirs(save_directory, exist_ok=True)
# Only save the model itself if we are using distributed training
model_to_save = self.module if hasattr(self, "module") else self
# Attach architecture to the config
model_to_save.config.architectures = [model_to_save.__class__.__name__]
# If we save using the predefined names, we can load using `from_pretrained`
output_model_file = os.path.join(save_directory, WEIGHTS_NAME)
if getattr(self.config, "xla_device", False):
import torch_xla.core.xla_model as xm
if xm.is_master_ordinal():
# Save configuration file
model_to_save.config.save_pretrained(save_directory)
# xm.save takes care of saving only from master
xm.save(model_to_save.state_dict(), output_model_file)
else:
model_to_save.config.save_pretrained(save_directory)
torch.save(model_to_save.state_dict(), output_model_file)
logger.info("Model weights saved in {}".format(output_model_file))
@classmethod
def from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs):
r"""Instantiate a pretrained pytorch model from a pre-trained model configuration.
The model is set in evaluation mode by default using ``model.eval()`` (Dropout modules are deactivated)
To train the model, you should first set it back in training mode with ``model.train()``
The warning ``Weights from XXX not initialized from pretrained model`` means that the weights of XXX do not come pre-trained with the rest of the model.
It is up to you to train those weights with a downstream fine-tuning task.
The warning ``Weights from XXX not used in YYY`` means that the layer XXX is not used by YYY, therefore those weights are discarded.
Parameters:
pretrained_model_name_or_path: either:
- a string with the `shortcut name` of a pre-trained model to load from cache or download, e.g.: ``bert-base-uncased``.
- a string with the `identifier name` of a pre-trained model that was user-uploaded to our S3, e.g.: ``dbmdz/bert-base-german-cased``.
- a path to a `directory` containing model weights saved using :func:`~transformers.PreTrainedModel.save_pretrained`, e.g.: ``./my_model_directory/``.
- a path or url to a `tensorflow index checkpoint file` (e.g. `./tf_model/model.ckpt.index`). In this case, ``from_tf`` should be set to True and a configuration object should be provided as ``config`` argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
- None if you are both providing the configuration and state dictionary (resp. with keyword arguments ``config`` and ``state_dict``)
model_args: (`optional`) Sequence of positional arguments:
All remaining positional arguments will be passed to the underlying model's ``__init__`` method
config: (`optional`) one of:
- an instance of a class derived from :class:`~transformers.PretrainedConfig`, or
- a string valid as input to :func:`~transformers.PretrainedConfig.from_pretrained()`
Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- the model is a model provided by the library (loaded with the ``shortcut-name`` string of a pretrained model), or
- the model was saved using :func:`~transformers.PreTrainedModel.save_pretrained` and is reloaded by supplying the save directory.
- the model is loaded by supplying a local directory as ``pretrained_model_name_or_path`` and a configuration JSON file named `config.json` is found in the directory.
state_dict: (`optional`) dict:
an optional state dictionary for the model to use instead of a state dictionary loaded from saved weights file.
This option can be used if you want to create a model from a pretrained configuration but load your own weights.
In this case though, you should check if using :func:`~transformers.PreTrainedModel.save_pretrained` and :func:`~transformers.PreTrainedModel.from_pretrained` is not a simpler option.
cache_dir: (`optional`) string:
Path to a directory in which a downloaded pre-trained model
configuration should be cached if the standard cache should not be used.
force_download: (`optional`) boolean, default False:
Force to (re-)download the model weights and configuration files and override the cached versions if they exist.
resume_download: (`optional`) boolean, default False:
Do not delete an incompletely received file. Attempt to resume the download if such a file exists.
proxies: (`optional`) dict, default None:
A dictionary of proxy servers to use by protocol or endpoint, e.g.: {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}.
The proxies are used on each request.
output_loading_info: (`optional`) boolean:
Set to ``True`` to also return a dictionary containing missing keys, unexpected keys and error messages.
kwargs: (`optional`) Remaining dictionary of keyword arguments:
Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g. ``output_attention=True``). Behaves differently depending on whether a `config` is provided or automatically loaded:
- If a configuration is provided with ``config``, ``**kwargs`` will be directly passed to the underlying model's ``__init__`` method (we assume all relevant updates to the configuration have already been done)
- If a configuration is not provided, ``kwargs`` will be first passed to the configuration class initialization function (:func:`~transformers.PretrainedConfig.from_pretrained`). Each key of ``kwargs`` that corresponds to a configuration attribute will be used to override said attribute with the supplied ``kwargs`` value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's ``__init__`` function.
Examples::
# For example purposes. Not runnable.
model = BertModel.from_pretrained('bert-base-uncased') # Download model and configuration from S3 and cache.
model = BertModel.from_pretrained('./test/saved_model/') # E.g. model was saved using `save_pretrained('./test/saved_model/')`
model = BertModel.from_pretrained('bert-base-uncased', output_attention=True) # Update configuration during loading
assert model.config.output_attention == True
# Loading from a TF checkpoint file instead of a PyTorch model (slower)
config = BertConfig.from_json_file('./tf_model/my_tf_model_config.json')
model = BertModel.from_pretrained('./tf_model/my_tf_checkpoint.ckpt.index', from_tf=True, config=config)
"""
config = kwargs.pop("config", None)
state_dict = kwargs.pop("state_dict", None)
cache_dir = kwargs.pop("cache_dir", None)
from_tf = kwargs.pop("from_tf", False)
force_download = kwargs.pop("force_download", False)
resume_download = kwargs.pop("resume_download", False)
proxies = kwargs.pop("proxies", None)
output_loading_info = kwargs.pop("output_loading_info", False)
local_files_only = kwargs.pop("local_files_only", False)
use_cdn = kwargs.pop("use_cdn", True)
# Load config if we don't provide a configuration
if not isinstance(config, PretrainedConfig):
config_path = config if config is not None else pretrained_model_name_or_path
config, model_kwargs = cls.config_class.from_pretrained(
config_path,
*model_args,
cache_dir=cache_dir,
return_unused_kwargs=True,
force_download=force_download,
resume_download=resume_download,
proxies=proxies,
local_files_only=local_files_only,
**kwargs,
)
else:
model_kwargs = kwargs
# Load model
if pretrained_model_name_or_path is not None:
if os.path.isdir(pretrained_model_name_or_path):
if from_tf and os.path.isfile(os.path.join(pretrained_model_name_or_path, TF_WEIGHTS_NAME + ".index")):
# Load from a TF 1.0 checkpoint
archive_file = os.path.join(pretrained_model_name_or_path, TF_WEIGHTS_NAME + ".index")
elif from_tf and os.path.isfile(os.path.join(pretrained_model_name_or_path, TF2_WEIGHTS_NAME)):
# Load from a TF 2.0 checkpoint
archive_file = os.path.join(pretrained_model_name_or_path, TF2_WEIGHTS_NAME)
elif os.path.isfile(os.path.join(pretrained_model_name_or_path, WEIGHTS_NAME)):
# Load from a PyTorch checkpoint
archive_file = os.path.join(pretrained_model_name_or_path, WEIGHTS_NAME)
else:
raise EnvironmentError(
"Error no file named {} found in directory {} or `from_tf` set to False".format(
[WEIGHTS_NAME, TF2_WEIGHTS_NAME, TF_WEIGHTS_NAME + ".index"],
pretrained_model_name_or_path,
)
)
elif os.path.isfile(pretrained_model_name_or_path) or is_remote_url(pretrained_model_name_or_path):
archive_file = pretrained_model_name_or_path
elif os.path.isfile(pretrained_model_name_or_path + ".index"):
assert (
from_tf
), "We found a TensorFlow checkpoint at {}, please set from_tf to True to load from this checkpoint".format(
pretrained_model_name_or_path + ".index"
)
archive_file = pretrained_model_name_or_path + ".index"
else:
archive_file = hf_bucket_url(
pretrained_model_name_or_path,
filename=(TF2_WEIGHTS_NAME if from_tf else WEIGHTS_NAME),
use_cdn=use_cdn,
)
try:
# Load from URL or cache if already cached
resolved_archive_file = cached_path(
archive_file,
cache_dir=cache_dir,
force_download=force_download,
proxies=proxies,
resume_download=resume_download,
local_files_only=local_files_only,
)
if resolved_archive_file is None:
raise EnvironmentError
except EnvironmentError:
msg = (
f"Can't load weights for '{pretrained_model_name_or_path}'. Make sure that:\n\n"
f"- '{pretrained_model_name_or_path}' is a correct model identifier listed on 'https://huggingface.co/models'\n\n"
f"- or '{pretrained_model_name_or_path}' is the correct path to a directory containing a file named one of {WEIGHTS_NAME}, {TF2_WEIGHTS_NAME}, {TF_WEIGHTS_NAME}.\n\n"
)
raise EnvironmentError(msg)
if resolved_archive_file == archive_file:
logger.info("loading weights file {}".format(archive_file))
else:
logger.info("loading weights file {} from cache at {}".format(archive_file, resolved_archive_file))
else:
resolved_archive_file = None
# Instantiate model.
model = cls(config, *model_args, **model_kwargs)
if state_dict is None and not from_tf:
try:
state_dict = torch.load(resolved_archive_file, map_location="cpu")
except Exception:
raise OSError(
"Unable to load weights from pytorch checkpoint file. "
"If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. "
)
missing_keys = []
unexpected_keys = []
error_msgs = []
if from_tf:
if resolved_archive_file.endswith(".index"):
# Load from a TensorFlow 1.X checkpoint - provided by original authors
model = cls.load_tf_weights(model, config, resolved_archive_file[:-6]) # Remove the '.index'
else:
# Load from our TensorFlow 2.0 checkpoints
try:
from transformers import load_tf2_checkpoint_in_pytorch_model
model = load_tf2_checkpoint_in_pytorch_model(model, resolved_archive_file, allow_missing_keys=True)
except ImportError:
logger.error(
"Loading a TensorFlow model in PyTorch, requires both PyTorch and TensorFlow to be installed. Please see "
"https://pytorch.org/ and https://www.tensorflow.org/install/ for installation instructions."
)
raise
else:
# Convert old format to new format if needed from a PyTorch state_dict
old_keys = []
new_keys = []
for key in state_dict.keys():
new_key = None
if "gamma" in key:
new_key = key.replace("gamma", "weight")
if "beta" in key:
new_key = key.replace("beta", "bias")
if new_key:
old_keys.append(key)
new_keys.append(new_key)
for old_key, new_key in zip(old_keys, new_keys):
state_dict[new_key] = state_dict.pop(old_key)
# copy state_dict so _load_from_state_dict can modify it
metadata = getattr(state_dict, "_metadata", None)
state_dict = state_dict.copy()
if metadata is not None:
state_dict._metadata = metadata
##############################################################################################
# Print out state_dict's contents: keys
'''
for key, _ in state_dict.items():
print(key)
'''
##############################################################################################
# PyTorch's `_load_from_state_dict` does not copy parameters in a module's descendants
# so we need to apply the function recursively.
def load(module: nn.Module, prefix=""):
local_metadata = {} if metadata is None else metadata.get(prefix[:-1], {})
module._load_from_state_dict(
state_dict, prefix, local_metadata, True, missing_keys, unexpected_keys, error_msgs,
)
for name, child in module._modules.items():
if child is not None:
load(child, prefix + name + ".")
# Make sure we are able to load base models as well as derived models (with heads)
start_prefix = ""
model_to_load = model
has_prefix_module = any(s.startswith(cls.base_model_prefix) for s in state_dict.keys())
if not hasattr(model, cls.base_model_prefix) and has_prefix_module:
start_prefix = cls.base_model_prefix + "."
if hasattr(model, cls.base_model_prefix) and not has_prefix_module:
model_to_load = getattr(model, cls.base_model_prefix)
load(model_to_load, prefix=start_prefix)
if model.__class__.__name__ != model_to_load.__class__.__name__:
base_model_state_dict = model_to_load.state_dict().keys()
head_model_state_dict_without_base_prefix = [
key.split(cls.base_model_prefix + ".")[-1] for key in model.state_dict().keys()
]
missing_keys.extend(head_model_state_dict_without_base_prefix - base_model_state_dict)
if len(unexpected_keys) > 0:
logger.warning(
f"Some weights of the model checkpoint at {pretrained_model_name_or_path} were not used when "
f"initializing {model.__class__.__name__}: {unexpected_keys}\n"
f"- This IS expected if you are initializing {model.__class__.__name__} from the checkpoint of a model trained on another task "
f"or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).\n"
f"- This IS NOT expected if you are initializing {model.__class__.__name__} from the checkpoint of a model that you expect "
f"to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model)."
)
else:
logger.info(f"All model checkpoint weights were used when initializing {model.__class__.__name__}.\n")
if len(missing_keys) > 0:
logger.warning(
f"Some weights of {model.__class__.__name__} were not initialized from the model checkpoint at {pretrained_model_name_or_path} "
f"and are newly initialized: {missing_keys}\n"
f"You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference."
)
else:
logger.info(
f"All the weights of {model.__class__.__name__} were initialized from the model checkpoint at {pretrained_model_name_or_path}.\n"
f"If your task is similar to the task the model of the ckeckpoint was trained on, "
f"you can already use {model.__class__.__name__} for predictions without further training."
)
if len(error_msgs) > 0:
raise RuntimeError(
"Error(s) in loading state_dict for {}:\n\t{}".format(
model.__class__.__name__, "\n\t".join(error_msgs)
)
)
model.tie_weights() # make sure token embedding weights are still tied if needed
# Set model in evaluation mode to deactivate DropOut modules by default
model.eval()
if output_loading_info:
loading_info = {
"missing_keys": missing_keys,
"unexpected_keys": unexpected_keys,
"error_msgs": error_msgs,
}
return model, loading_info
if hasattr(config, "xla_device") and config.xla_device:
import torch_xla.core.xla_model as xm
model = xm.send_cpu_data_to_device(model, xm.xla_device())
model.to(xm.xla_device())
return model
class Conv1D(nn.Module):
def __init__(self, nf, nx):
""" Conv1D layer as defined by Radford et al. for OpenAI GPT (and also used in GPT-2)
Basically works like a Linear layer but the weights are transposed
"""
super().__init__()
self.nf = nf
w = torch.empty(nx, nf)
nn.init.normal_(w, std=0.02)
self.weight = nn.Parameter(w)
self.bias = nn.Parameter(torch.zeros(nf))
def forward(self, x):
size_out = x.size()[:-1] + (self.nf,)
x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight)
x = x.view(*size_out)
return x
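# A minimal usage sketch for Conv1D (illustrative only; the sizes below are arbitrary
# assumptions). Conv1D stores its weight as (nx, nf), i.e. transposed relative to
# nn.Linear, but still maps (..., nx) -> (..., nf) the way a Linear layer would.
def _conv1d_usage_sketch():
    layer = Conv1D(nf=8, nx=4)      # weight: (4, 8), bias: (8,)
    x = torch.randn(2, 5, 4)        # (batch, seq_len, nx)
    y = layer(x)                    # (batch, seq_len, nf)
    assert y.shape == (2, 5, 8)
    return y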
class PoolerStartLogits(nn.Module):
""" Compute SQuAD start_logits from sequence hidden states. """
def __init__(self, config):
super().__init__()
self.dense = nn.Linear(config.hidden_size, 1)
def forward(self, hidden_states, p_mask=None):
""" Args:
**p_mask**: (`optional`) ``torch.FloatTensor`` of shape `(batch_size, seq_len)`
invalid position mask such as query and special symbols (PAD, SEP, CLS)
1.0 means token should be masked.
"""
x = self.dense(hidden_states).squeeze(-1)
if p_mask is not None:
if next(self.parameters()).dtype == torch.float16:
x = x * (1 - p_mask) - 65500 * p_mask
else:
x = x * (1 - p_mask) - 1e30 * p_mask
return x
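# A minimal sketch of PoolerStartLogits with a padding mask (illustrative only; the
# ad-hoc config object and shapes are assumptions). Positions where p_mask == 1.0 get a
# very large negative logit so they cannot be predicted as span starts.
def _start_logits_usage_sketch():
    cfg = type("Cfg", (), {"hidden_size": 16})()   # stand-in config for the demo
    pooler = PoolerStartLogits(cfg)
    hidden = torch.randn(2, 7, 16)                 # (bsz, seq_len, hidden_size)
    p_mask = torch.zeros(2, 7)
    p_mask[:, 0] = 1.0                             # e.g. mask out the [CLS] position
    logits = pooler(hidden, p_mask=p_mask)         # (bsz, seq_len)
    return logits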
class PoolerEndLogits(nn.Module):
""" Compute SQuAD end_logits from sequence hidden states and start token hidden state.
"""
def __init__(self, config):
super().__init__()
self.dense_0 = nn.Linear(config.hidden_size * 2, config.hidden_size)
self.activation = nn.Tanh()
self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
self.dense_1 = nn.Linear(config.hidden_size, 1)
def forward(self, hidden_states, start_states=None, start_positions=None, p_mask=None):
""" Args:
One of ``start_states`` or ``start_positions`` should not be None.
If both are set, ``start_positions`` overrides ``start_states``.
**start_states**: ``torch.FloatTensor`` of shape identical to hidden_states
hidden states of the first tokens for the labeled span.
**start_positions**: ``torch.LongTensor`` of shape ``(batch_size,)``
position of the first token for the labeled span.
**p_mask**: (`optional`) ``torch.FloatTensor`` of shape ``(batch_size, seq_len)``
Mask of invalid position such as query and special symbols (PAD, SEP, CLS)
1.0 means token should be masked.
"""
assert (
start_states is not None or start_positions is not None
), "One of start_states, start_positions should not be None"
if start_positions is not None:
slen, hsz = hidden_states.shape[-2:]
start_positions = start_positions[:, None, None].expand(-1, -1, hsz) # shape (bsz, 1, hsz)
start_states = hidden_states.gather(-2, start_positions) # shape (bsz, 1, hsz)
start_states = start_states.expand(-1, slen, -1) # shape (bsz, slen, hsz)
x = self.dense_0(torch.cat([hidden_states, start_states], dim=-1))
x = self.activation(x)
x = self.LayerNorm(x)
x = self.dense_1(x).squeeze(-1)
if p_mask is not None:
if next(self.parameters()).dtype == torch.float16:
x = x * (1 - p_mask) - 65500 * p_mask
else:
x = x * (1 - p_mask) - 1e30 * p_mask
return x
class PoolerAnswerClass(nn.Module):
""" Compute SQuAD 2.0 answer class from classification and start tokens hidden states. """
def __init__(self, config):
super().__init__()
self.dense_0 = nn.Linear(config.hidden_size * 2, config.hidden_size)
self.activation = nn.Tanh()
self.dense_1 = nn.Linear(config.hidden_size, 1, bias=False)
def forward(self, hidden_states, start_states=None, start_positions=None, cls_index=None):
"""
Args:
One of ``start_states`` or ``start_positions`` should not be None.
If both are set, ``start_positions`` overrides ``start_states``.
**start_states**: ``torch.FloatTensor`` of shape identical to ``hidden_states``.
hidden states of the first tokens for the labeled span.
**start_positions**: ``torch.LongTensor`` of shape ``(batch_size,)``
position of the first token for the labeled span.
**cls_index**: torch.LongTensor of shape ``(batch_size,)``
position of the CLS token. If None, take the last token.
note(Original repo):
no dependency on end_feature so that we can obtain one single `cls_logits`
for each sample
"""
hsz = hidden_states.shape[-1]
assert (
start_states is not None or start_positions is not None
), "One of start_states, start_positions should not be None"
if start_positions is not None:
start_positions = start_positions[:, None, None].expand(-1, -1, hsz) # shape (bsz, 1, hsz)
start_states = hidden_states.gather(-2, start_positions).squeeze(-2) # shape (bsz, hsz)
if cls_index is not None:
cls_index = cls_index[:, None, None].expand(-1, -1, hsz) # shape (bsz, 1, hsz)
cls_token_state = hidden_states.gather(-2, cls_index).squeeze(-2) # shape (bsz, hsz)
else:
cls_token_state = hidden_states[:, -1, :] # shape (bsz, hsz)
x = self.dense_0(torch.cat([start_states, cls_token_state], dim=-1))
x = self.activation(x)
x = self.dense_1(x).squeeze(-1)
return x
class SQuADHead(nn.Module):
r""" A SQuAD head inspired by XLNet.
Parameters:
config (:class:`~transformers.XLNetConfig`): Model configuration class with all the parameters of the model.
Inputs:
**hidden_states**: ``torch.FloatTensor`` of shape ``(batch_size, seq_len, hidden_size)``
hidden states of sequence tokens
**start_positions**: ``torch.LongTensor`` of shape ``(batch_size,)``
position of the first token for the labeled span.
**end_positions**: ``torch.LongTensor`` of shape ``(batch_size,)``
position of the last token for the labeled span.
**cls_index**: torch.LongTensor of shape ``(batch_size,)``
position of the CLS token. If None, take the last token.
**is_impossible**: ``torch.LongTensor`` of shape ``(batch_size,)``
Whether the question has a possible answer in the paragraph or not.
**p_mask**: (`optional`) ``torch.FloatTensor`` of shape ``(batch_size, seq_len)``
Mask of invalid position such as query and special symbols (PAD, SEP, CLS)
1.0 means token should be masked.
Outputs: `Tuple` comprising various elements depending on the configuration (config) and inputs:
**loss**: (`optional`, returned if both ``start_positions`` and ``end_positions`` are provided) ``torch.FloatTensor`` of shape ``(1,)``:
Classification loss as the sum of start token, end token (and is_impossible if provided) classification losses.
**start_top_log_probs**: (`optional`, returned if ``start_positions`` or ``end_positions`` is not provided)
``torch.FloatTensor`` of shape ``(batch_size, config.start_n_top)``
Log probabilities for the top config.start_n_top start token possibilities (beam-search).
**start_top_index**: (`optional`, returned if ``start_positions`` or ``end_positions`` is not provided)
``torch.LongTensor`` of shape ``(batch_size, config.start_n_top)``
Indices for the top config.start_n_top start token possibilities (beam-search).
**end_top_log_probs**: (`optional`, returned if ``start_positions`` or ``end_positions`` is not provided)
``torch.FloatTensor`` of shape ``(batch_size, config.start_n_top * config.end_n_top)``
Log probabilities for the top ``config.start_n_top * config.end_n_top`` end token possibilities (beam-search).
**end_top_index**: (`optional`, returned if ``start_positions`` or ``end_positions`` is not provided)
``torch.LongTensor`` of shape ``(batch_size, config.start_n_top * config.end_n_top)``
Indices for the top ``config.start_n_top * config.end_n_top`` end token possibilities (beam-search).
**cls_logits**: (`optional`, returned if ``start_positions`` or ``end_positions`` is not provided)
``torch.FloatTensor`` of shape ``(batch_size,)``
Log probabilities for the ``is_impossible`` label of the answers.
"""
def __init__(self, config):
super().__init__()
self.start_n_top = config.start_n_top
self.end_n_top = config.end_n_top
self.start_logits = PoolerStartLogits(config)
self.end_logits = PoolerEndLogits(config)
self.answer_class = PoolerAnswerClass(config)
def forward(
self, hidden_states, start_positions=None, end_positions=None, cls_index=None, is_impossible=None, p_mask=None,
):
outputs = ()
start_logits = self.start_logits(hidden_states, p_mask=p_mask)
if start_positions is not None and end_positions is not None:
# If we are on multi-GPU, let's remove the dimension added by batch splitting
for x in (start_positions, end_positions, cls_index, is_impossible):
if x is not None and x.dim() > 1:
x.squeeze_(-1)
# during training, compute the end logits based on the ground truth of the start position
end_logits = self.end_logits(hidden_states, start_positions=start_positions, p_mask=p_mask)
loss_fct = CrossEntropyLoss()
start_loss = loss_fct(start_logits, start_positions)
end_loss = loss_fct(end_logits, end_positions)
total_loss = (start_loss + end_loss) / 2
if cls_index is not None and is_impossible is not None:
# Predict answerability from the representation of CLS and START
cls_logits = self.answer_class(hidden_states, start_positions=start_positions, cls_index=cls_index)
loss_fct_cls = nn.BCEWithLogitsLoss()
cls_loss = loss_fct_cls(cls_logits, is_impossible)
# note(zhiliny): by default multiply the loss by 0.5 so that the scale is comparable to start_loss and end_loss
total_loss += cls_loss * 0.5
outputs = (total_loss,) + outputs
else:
# during inference, compute the end logits based on beam search
bsz, slen, hsz = hidden_states.size()
start_log_probs = F.softmax(start_logits, dim=-1) # shape (bsz, slen)
start_top_log_probs, start_top_index = torch.topk(
start_log_probs, self.start_n_top, dim=-1
) # shape (bsz, start_n_top)
start_top_index_exp = start_top_index.unsqueeze(-1).expand(-1, -1, hsz) # shape (bsz, start_n_top, hsz)
start_states = torch.gather(hidden_states, -2, start_top_index_exp) # shape (bsz, start_n_top, hsz)
start_states = start_states.unsqueeze(1).expand(-1, slen, -1, -1) # shape (bsz, slen, start_n_top, hsz)
hidden_states_expanded = hidden_states.unsqueeze(2).expand_as(
start_states
) # shape (bsz, slen, start_n_top, hsz)
p_mask = p_mask.unsqueeze(-1) if p_mask is not None else None
end_logits = self.end_logits(hidden_states_expanded, start_states=start_states, p_mask=p_mask)
end_log_probs = F.softmax(end_logits, dim=1) # shape (bsz, slen, start_n_top)
end_top_log_probs, end_top_index = torch.topk(
end_log_probs, self.end_n_top, dim=1
) # shape (bsz, end_n_top, start_n_top)
end_top_log_probs = end_top_log_probs.view(-1, self.start_n_top * self.end_n_top)
end_top_index = end_top_index.view(-1, self.start_n_top * self.end_n_top)
start_states = torch.einsum("blh,bl->bh", hidden_states, start_log_probs)
cls_logits = self.answer_class(hidden_states, start_states=start_states, cls_index=cls_index)
outputs = (start_top_log_probs, start_top_index, end_top_log_probs, end_top_index, cls_logits,) + outputs
# return start_top_log_probs, start_top_index, end_top_log_probs, end_top_index, cls_logits
# or (if labels are provided) (total_loss,)
return outputs
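# A minimal sketch of SQuADHead (illustrative only; the stand-in config and shapes are
# assumptions). With gold start/end positions it returns (total_loss,); without them it
# returns top-k start/end log-probs and indices plus cls_logits, as documented above.
def _squad_head_usage_sketch():
    cfg = type("Cfg", (), {
        "hidden_size": 16, "layer_norm_eps": 1e-12, "start_n_top": 2, "end_n_top": 2,
    })()
    head = SQuADHead(cfg)
    hidden = torch.randn(3, 10, 16)               # (bsz, seq_len, hidden_size)
    # Training-style call: a single-element tuple holding the loss.
    train_out = head(
        hidden,
        start_positions=torch.tensor([1, 2, 3]),
        end_positions=torch.tensor([4, 5, 6]),
    )
    # Inference-style call: beam-search style top-k candidates.
    infer_out = head(hidden)
    return train_out, infer_out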
class SequenceSummary(nn.Module):
r""" Compute a single vector summary of a sequence hidden states according to various possibilities:
Args of the config class:
summary_type:
- 'last' => [default] take the last token hidden state (like XLNet)
- 'first' => take the first token hidden state (like Bert)
- 'mean' => take the mean of all tokens hidden states
- 'cls_index' => supply a Tensor of classification token position (GPT/GPT-2)
- 'attn' => Not implemented now, use multi-head attention
summary_use_proj: Add a projection after the vector extraction
summary_proj_to_labels: If True, the projection outputs to config.num_labels classes (otherwise to hidden_size). Default: False.
summary_activation: string => add the corresponding activation to the output (e.g. 'tanh'); None => no activation. Default: None (no activation).
summary_first_dropout: Add a dropout before the projection and activation
summary_last_dropout: Add a dropout after the projection and activation
"""
def __init__(self, config: PretrainedConfig):
super().__init__()
self.summary_type = getattr(config, "summary_type", "last")
if self.summary_type == "attn":
# We should use a standard multi-head attention module with absolute positional embedding for that.
# Cf. https://github.com/zihangdai/xlnet/blob/master/modeling.py#L253-L276
# We can probably just use the multi-head attention module of PyTorch >=1.1.0
raise NotImplementedError
self.summary = Identity()
if hasattr(config, "summary_use_proj") and config.summary_use_proj:
if hasattr(config, "summary_proj_to_labels") and config.summary_proj_to_labels and config.num_labels > 0:
num_classes = config.num_labels
else:
num_classes = config.hidden_size
self.summary = nn.Linear(config.hidden_size, num_classes)
activation_string = getattr(config, "summary_activation", None)
self.activation: Callable = (get_activation(activation_string) if activation_string else Identity())
self.first_dropout = Identity()
if hasattr(config, "summary_first_dropout") and config.summary_first_dropout > 0:
self.first_dropout = nn.Dropout(config.summary_first_dropout)
self.last_dropout = Identity()
if hasattr(config, "summary_last_dropout") and config.summary_last_dropout > 0:
self.last_dropout = nn.Dropout(config.summary_last_dropout)
def forward(self, hidden_states, cls_index=None):
""" hidden_states: float Tensor in shape [bsz, ..., seq_len, hidden_size], the hidden-states of the last layer.
cls_index: [optional] position of the classification token if summary_type == 'cls_index',
shape (bsz,) or more generally (bsz, ...) where ... are optional leading dimensions of hidden_states.
if summary_type == 'cls_index' and cls_index is None:
we take the last token of the sequence as classification token
"""
if self.summary_type == "last":
output = hidden_states[:, -1]
elif self.summary_type == "first":
output = hidden_states[:, 0]
elif self.summary_type == "mean":
output = hidden_states.mean(dim=1)
elif self.summary_type == "cls_index":
if cls_index is None:
cls_index = torch.full_like(hidden_states[..., :1, :], hidden_states.shape[-2] - 1, dtype=torch.long,)
else:
cls_index = cls_index.unsqueeze(-1).unsqueeze(-1)
cls_index = cls_index.expand((-1,) * (cls_index.dim() - 1) + (hidden_states.size(-1),))
# shape of cls_index: (bsz, XX, 1, hidden_size) where XX are optional leading dim of hidden_states
output = hidden_states.gather(-2, cls_index).squeeze(-2) # shape (bsz, XX, hidden_size)
elif self.summary_type == "attn":
raise NotImplementedError
output = self.first_dropout(output)
output = self.summary(output)
output = self.activation(output)
output = self.last_dropout(output)
return output
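# A minimal sketch of SequenceSummary (illustrative only; the stand-in config is an
# assumption). With summary_type='first' it pools the first-token ([CLS]-style) hidden
# state and, because summary_use_proj is set, projects it back to hidden_size.
def _sequence_summary_usage_sketch():
    cfg = type("Cfg", (), {
        "summary_type": "first", "summary_use_proj": True,
        "summary_proj_to_labels": False, "num_labels": 0,
        "summary_activation": None, "hidden_size": 16,
    })()
    summary = SequenceSummary(cfg)
    hidden = torch.randn(4, 9, 16)       # (bsz, seq_len, hidden_size)
    pooled = summary(hidden)             # (bsz, hidden_size)
    assert pooled.shape == (4, 16)
    return pooled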
def prune_linear_layer(layer, index, dim=0):
""" Prune a linear layer (a model parameters) to keep only entries in index.
Return the pruned layer as a new layer with requires_grad=True.
Used to remove heads.
"""
index = index.to(layer.weight.device)
W = layer.weight.index_select(dim, index).clone().detach()
if layer.bias is not None:
if dim == 1:
b = layer.bias.clone().detach()
else:
b = layer.bias[index].clone().detach()
new_size = list(layer.weight.size())
new_size[dim] = len(index)
new_layer = nn.Linear(new_size[1], new_size[0], bias=layer.bias is not None).to(layer.weight.device)
new_layer.weight.requires_grad = False
new_layer.weight.copy_(W.contiguous())
new_layer.weight.requires_grad = True
if layer.bias is not None:
new_layer.bias.requires_grad = False
new_layer.bias.copy_(b.contiguous())
new_layer.bias.requires_grad = True
return new_layer
def prune_conv1d_layer(layer, index, dim=1):
""" Prune a Conv1D layer (a model parameters) to keep only entries in index.
A Conv1D work as a Linear layer (see e.g. BERT) but the weights are transposed.
Return the pruned layer as a new layer with requires_grad=True.
Used to remove heads.
"""
index = index.to(layer.weight.device)
W = layer.weight.index_select(dim, index).clone().detach()
if dim == 0:
b = layer.bias.clone().detach()
else:
b = layer.bias[index].clone().detach()
new_size = list(layer.weight.size())
new_size[dim] = len(index)
new_layer = Conv1D(new_size[1], new_size[0]).to(layer.weight.device)
new_layer.weight.requires_grad = False
new_layer.weight.copy_(W.contiguous())
new_layer.weight.requires_grad = True
new_layer.bias.requires_grad = False
new_layer.bias.copy_(b.contiguous())
new_layer.bias.requires_grad = True
return new_layer
def prune_layer(layer, index, dim=None):
""" Prune a Conv1D or nn.Linear layer (a model parameters) to keep only entries in index.
Return the pruned layer as a new layer with requires_grad=True.
Used to remove heads.
"""
if isinstance(layer, nn.Linear):
return prune_linear_layer(layer, index, dim=0 if dim is None else dim)
elif isinstance(layer, Conv1D):
return prune_conv1d_layer(layer, index, dim=1 if dim is None else dim)
else:
raise ValueError("Can't prune layer of class {}".format(layer.__class__))
def apply_chunking_to_forward(
chunk_size: int, chunk_dim: int, forward_fn: Callable[..., torch.Tensor], *input_tensors
) -> torch.Tensor:
"""
This function chunks the `input_tensors` into smaller input tensor parts of size `chunk_size` over the dimension `chunk_dim`.
It then applies a layer `forward_fn` to each chunk independently to save memory.
If the `forward_fn` is independent across the `chunk_dim` this function will yield the
same result as not applying it.
Args:
chunk_size: int - the chunk size of a chunked tensor. `num_chunks` = `input_tensors[0].shape[chunk_dim] // chunk_size`
chunk_dim: int - the dimension over which the input_tensors should be chunked
forward_fn: fn - the forward fn of the model
input_tensors: tuple(torch.Tensor) - the input tensors of `forward_fn` which are chunked
Returns:
a Tensor with the same shape as the one `forward_fn` would have produced if applied directly
Examples::
# rename the usual forward() fn to forward_chunk()
def forward_chunk(self, hidden_states):
hidden_states = self.decoder(hidden_states)
return hidden_states
# implement a chunked forward function
def forward(self, hidden_states):
return apply_chunking_to_forward(self.chunk_size_lm_head, self.seq_len_dim, self.forward_chunk, hidden_states)
"""
assert len(input_tensors) > 0, "{} has to be a tuple/list of tensors".format(input_tensors)
tensor_shape = input_tensors[0].shape
assert all(
input_tensor.shape == tensor_shape for input_tensor in input_tensors
), "All input tenors have to be of the same shape"
# inspect.signature exists since Python 3.5 and is a Python method -> no problem with backward compatibility
num_args_in_forward_chunk_fn = len(inspect.signature(forward_fn).parameters)
assert num_args_in_forward_chunk_fn == len(
input_tensors
), "forward_chunk_fn expects {} arguments, but only {} input tensors are given".format(
num_args_in_forward_chunk_fn, len(input_tensors)
)
if chunk_size > 0:
assert (
input_tensors[0].shape[chunk_dim] % chunk_size == 0
), "The dimension to be chunked {} has to be a multiple of the chunk size {}".format(
input_tensors[0].shape[chunk_dim], chunk_size
)
num_chunks = input_tensors[0].shape[chunk_dim] // chunk_size
# chunk input tensor into tuples
input_tensors_chunks = tuple(input_tensor.chunk(num_chunks, dim=chunk_dim) for input_tensor in input_tensors)
# apply forward fn to every tuple
output_chunks = tuple(forward_fn(*input_tensors_chunk) for input_tensors_chunk in zip(*input_tensors_chunks))
# concatenate output at same dimension
return torch.cat(output_chunks, dim=chunk_dim)
return forward_fn(*input_tensors)
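# A minimal sketch of apply_chunking_to_forward (illustrative only; the toy forward_fn
# and sizes are assumptions). Because the per-position projection is independent along
# the chunked (sequence) dimension, the chunked output matches the unchunked one.
def _chunking_usage_sketch():
    dense = nn.Linear(4, 4)
    def forward_fn(hidden_states):
        return dense(hidden_states)
    hidden = torch.randn(2, 6, 4)      # seq_len 6 is chunked into 3 chunks of size 2
    chunked = apply_chunking_to_forward(2, 1, forward_fn, hidden)
    assert torch.allclose(chunked, forward_fn(hidden), atol=1e-6)
    return chunked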
================================================
FILE: bert/tokenization_bert.py
================================================
# coding=utf-8
# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tokenization classes."""
import collections
import logging
import os
import unicodedata
from typing import List, Optional
from .tokenization_utils import PreTrainedTokenizer, _is_control, _is_punctuation, _is_whitespace
logger = logging.getLogger(__name__)
VOCAB_FILES_NAMES = {"vocab_file": "vocab.txt"}
PRETRAINED_VOCAB_FILES_MAP = {
"vocab_file": {
"bert-base-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt",
"bert-large-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-vocab.txt",
"bert-base-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-vocab.txt",
"bert-large-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-vocab.txt",
"bert-base-multilingual-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-uncased-vocab.txt",
"bert-base-multilingual-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-cased-vocab.txt",
"bert-base-chinese": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese-vocab.txt",
"bert-base-german-cased": "https://int-deepset-models-bert.s3.eu-central-1.amazonaws.com/pytorch/bert-base-german-cased-vocab.txt",
"bert-large-uncased-whole-word-masking": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-vocab.txt",
"bert-large-cased-whole-word-masking": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-whole-word-masking-vocab.txt",
"bert-large-uncased-whole-word-masking-finetuned-squad": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-finetuned-squad-vocab.txt",
"bert-large-cased-whole-word-masking-finetuned-squad": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-whole-word-masking-finetuned-squad-vocab.txt",
"bert-base-cased-finetuned-mrpc": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-finetuned-mrpc-vocab.txt",
"bert-base-german-dbmdz-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-vocab.txt",
"bert-base-german-dbmdz-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-vocab.txt",
"TurkuNLP/bert-base-finnish-cased-v1": "https://s3.amazonaws.com/models.huggingface.co/bert/TurkuNLP/bert-base-finnish-cased-v1/vocab.txt",
"TurkuNLP/bert-base-finnish-uncased-v1": "https://s3.amazonaws.com/models.huggingface.co/bert/TurkuNLP/bert-base-finnish-uncased-v1/vocab.txt",
"wietsedv/bert-base-dutch-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/wietsedv/bert-base-dutch-cased/vocab.txt",
}
}
PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = {
"bert-base-uncased": 512,
"bert-large-uncased": 512,
"bert-base-cased": 512,
"bert-large-cased": 512,
"bert-base-multilingual-uncased": 512,
"bert-base-multilingual-cased": 512,
"bert-base-chinese": 512,
"bert-base-german-cased": 512,
"bert-large-uncased-whole-word-masking": 512,
"bert-large-cased-whole-word-masking": 512,
"bert-large-uncased-whole-word-masking-finetuned-squad": 512,
"bert-large-cased-whole-word-masking-finetuned-squad": 512,
"bert-base-cased-finetuned-mrpc": 512,
"bert-base-german-dbmdz-cased": 512,
"bert-base-german-dbmdz-uncased": 512,
"TurkuNLP/bert-base-finnish-cased-v1": 512,
"TurkuNLP/bert-base-finnish-uncased-v1": 512,
"wietsedv/bert-base-dutch-cased": 512,
}
PRETRAINED_INIT_CONFIGURATION = {
"bert-base-uncased": {"do_lower_case": True},
"bert-large-uncased": {"do_lower_case": True},
"bert-base-cased": {"do_lower_case": False},
"bert-large-cased": {"do_lower_case": False},
"bert-base-multilingual-uncased": {"do_lower_case": True},
"bert-base-multilingual-cased": {"do_lower_case": False},
"bert-base-chinese": {"do_lower_case": False},
"bert-base-german-cased": {"do_lower_case": False},
"bert-large-uncased-whole-word-masking": {"do_lower_case": True},
"bert-large-cased-whole-word-masking": {"do_lower_case": False},
"bert-large-uncased-whole-word-masking-finetuned-squad": {"do_lower_case": True},
"bert-large-cased-whole-word-masking-finetuned-squad": {"do_lower_case": False},
"bert-base-cased-finetuned-mrpc": {"do_lower_case": False},
"bert-base-german-dbmdz-cased": {"do_lower_case": False},
"bert-base-german-dbmdz-uncased": {"do_lower_case": True},
"TurkuNLP/bert-base-finnish-cased-v1": {"do_lower_case": False},
"TurkuNLP/bert-base-finnish-uncased-v1": {"do_lower_case": True},
"wietsedv/bert-base-dutch-cased": {"do_lower_case": False},
}
def load_vocab(vocab_file):
"""Loads a vocabulary file into a dictionary."""
vocab = collections.OrderedDict()
with open(vocab_file, "r", encoding="utf-8") as reader:
tokens = reader.readlines()
for index, token in enumerate(tokens):
token = token.rstrip("\n")
vocab[token] = index
return vocab
def whitespace_tokenize(text):
"""Runs basic whitespace cleaning and splitting on a piece of text."""
text = text.strip()
if not text:
return []
tokens = text.split()
return tokens
class BertTokenizer(PreTrainedTokenizer):
r"""
Constructs a BERT tokenizer. Based on WordPiece.
This tokenizer inherits from :class:`~transformers.PreTrainedTokenizer` which contains most of the methods. Users
should refer to the superclass for more information regarding methods.
Args:
vocab_file (:obj:`string`):
File containing the vocabulary.
do_lower_case (:obj:`bool`, `optional`, defaults to :obj:`True`):
Whether to lowercase the input when tokenizing.
do_basic_tokenize (:obj:`bool`, `optional`, defaults to :obj:`True`):
Whether to do basic tokenization before WordPiece.
never_split (:obj:`Iterable`, `optional`, defaults to :obj:`None`):
Collection of tokens which will never be split during tokenization. Only has an effect when
:obj:`do_basic_tokenize=True`
unk_token (:obj:`string`, `optional`, defaults to "[UNK]"):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (:obj:`string`, `optional`, defaults to "[SEP]"):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences
for sequence classification or for a text and a question for question answering.
It is also used as the last token of a sequence built with special tokens.
pad_token (:obj:`string`, `optional`, defaults to "[PAD]"):
The token used for padding, for example when batching sequences of different lengths.
cls_token (:obj:`string`, `optional`, defaults to "[CLS]"):
The classifier token which is used when doing sequence classification (classification of the whole
sequence instead of per-token classification). It is the first token of the sequence when built with
special tokens.
mask_token (:obj:`string`, `optional`, defaults to "[MASK]"):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
tokenize_chinese_chars (:obj:`bool`, `optional`, defaults to :obj:`True`):
Whether to tokenize Chinese characters.
This should likely be deactivated for Japanese:
see: https://github.com/huggingface/transformers/issues/328
"""
vocab_files_names = VOCAB_FILES_NAMES
pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP
pretrained_init_configuration = PRETRAINED_INIT_CONFIGURATION
max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES
def __init__(
self,
vocab_file,
do_lower_case=True,
do_basic_tokenize=True,
never_split=None,
unk_token="[UNK]",
sep_token="[SEP]",
pad_token="[PAD]",
cls_token="[CLS]",
mask_token="[MASK]",
tokenize_chinese_chars=True,
**kwargs
):
super().__init__(
unk_token=unk_token,
sep_token=sep_token,
pad_token=pad_token,
cls_token=cls_token,
mask_token=mask_token,
**kwargs,
)
if not os.path.isfile(vocab_file):
raise ValueError(
"Can't find a vocabulary file at path '{}'. To load the vocabulary from a Google pretrained "
"model use `tokenizer = BertTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)`".format(vocab_file)
)
self.vocab = load_vocab(vocab_file)
self.ids_to_tokens = collections.OrderedDict([(ids, tok) for tok, ids in self.vocab.items()])
self.do_basic_tokenize = do_basic_tokenize
if do_basic_tokenize:
self.basic_tokenizer = BasicTokenizer(
do_lower_case=do_lower_case, never_split=never_split, tokenize_chinese_chars=tokenize_chinese_chars
)
self.wordpiece_tokenizer = WordpieceTokenizer(vocab=self.vocab, unk_token=self.unk_token)
@property
def vocab_size(self):
return len(self.vocab)
def get_vocab(self):
return dict(self.vocab, **self.added_tokens_encoder)
def _tokenize(self, text):
split_tokens = []
if self.do_basic_tokenize:
for token in self.basic_tokenizer.tokenize(text, never_split=self.all_special_tokens):
# If the token is part of the never_split set
if token in self.basic_tokenizer.never_split:
split_tokens.append(token)
else:
split_tokens += self.wordpiece_tokenizer.tokenize(token)
else:
split_tokens = self.wordpiece_tokenizer.tokenize(text)
return split_tokens
def _convert_token_to_id(self, token):
""" Converts a token (str) in an id using the vocab. """
return self.vocab.get(token, self.vocab.get(self.unk_token))
def _convert_id_to_token(self, index):
"""Converts an index (integer) in a token (str) using the vocab."""
return self.ids_to_tokens.get(index, self.unk_token)
def convert_tokens_to_string(self, tokens):
""" Converts a sequence of tokens (string) in a single string. """
out_string = " ".join(tokens).replace(" ##", "").strip()
return out_string
def build_inputs_with_special_tokens(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Build model inputs from a single sequence or a pair of sequences for sequence classification tasks
by concatenating and adding special tokens.
A BERT sequence has the following format:
- single sequence: ``[CLS] X [SEP]``
- pair of sequences: ``[CLS] A [SEP] B [SEP]``
Args:
token_ids_0 (:obj:`List[int]`):
List of IDs to which the special tokens will be added
token_ids_1 (:obj:`List[int]`, `optional`, defaults to :obj:`None`):
Optional second list of IDs for sequence pairs.
Returns:
:obj:`List[int]`: list of `input IDs <../glossary.html#input-ids>`__ with the appropriate special tokens.
"""
if token_ids_1 is None:
return [self.cls_token_id] + token_ids_0 + [self.sep_token_id]
cls = [self.cls_token_id]
sep = [self.sep_token_id]
return cls + token_ids_0 + sep + token_ids_1 + sep
def get_special_tokens_mask(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False
) -> List[int]:
"""
Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer ``prepare_for_model`` method.
Args:
token_ids_0 (:obj:`List[int]`):
List of ids.
token_ids_1 (:obj:`List[int]`, `optional`, defaults to :obj:`None`):
Optional second list of IDs for sequence pairs.
already_has_special_tokens (:obj:`bool`, `optional`, defaults to :obj:`False`):
Set to True if the token list is already formatted with special tokens for the model
Returns:
:obj:`List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
"""
if already_has_special_tokens:
if token_ids_1 is not None:
raise ValueError(
"You should not supply a second sequence if the provided sequence of "
"ids is already formated with special tokens for the model."
)
return list(map(lambda x: 1 if x in [self.sep_token_id, self.cls_token_id] else 0, token_ids_0))
if token_ids_1 is not None:
return [1] + ([0] * len(token_ids_0)) + [1] + ([0] * len(token_ids_1)) + [1]
return [1] + ([0] * len(token_ids_0)) + [1]
def create_token_type_ids_from_sequences(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Creates a mask from the two sequences passed to be used in a sequence-pair classification task.
A BERT sequence pair mask has the following format:
::
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
if token_ids_1 is None, only returns the first portion of the mask (0's).
Args:
token_ids_0 (:obj:`List[int]`):
List of ids.
token_ids_1 (:obj:`List[int]`, `optional`, defaults to :obj:`None`):
Optional second list of IDs for sequence pairs.
Returns:
:obj:`List[int]`: List of `token type IDs <../glossary.html#token-type-ids>`_ according to the given
sequence(s).
"""
sep = [self.sep_token_id]
cls = [self.cls_token_id]
if token_ids_1 is None:
return len(cls + token_ids_0 + sep) * [0]
return len(cls + token_ids_0 + sep) * [0] + len(token_ids_1 + sep) * [1]
def save_vocabulary(self, vocab_path):
"""
Save the tokenizer vocabulary (one token per line) to a directory or file.
Args:
vocab_path (:obj:`str`):
The directory in which to save the vocabulary.
Returns:
:obj:`Tuple(str)`: Paths to the files saved.
"""
index = 0
if os.path.isdir(vocab_path):
vocab_file = os.path.join(vocab_path, VOCAB_FILES_NAMES["vocab_file"])
else:
vocab_file = vocab_path
with open(vocab_file, "w", encoding="utf-8") as writer:
for token, token_index in sorted(self.vocab.items(), key=lambda kv: kv[1]):
if index != token_index:
logger.warning(
"Saving vocabulary to {}: vocabulary indices are not consecutive."
" Please check that the vocabulary is not corrupted!".format(vocab_file)
)
index = token_index
writer.write(token + "\n")
index += 1
return (vocab_file,)
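# A minimal sketch of BertTokenizer's special-token layout (illustrative only; the tiny
# temporary vocabulary written below is an assumption made just for this demo). A single
# sequence is encoded as [CLS] X [SEP]; a pair as [CLS] A [SEP] B [SEP], with token type
# ids 0 for the first segment and 1 for the second.
def _bert_tokenizer_usage_sketch():
    import tempfile
    toy_vocab = ["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]", "hello", "world"]
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write("\n".join(toy_vocab))
        vocab_path = f.name
    tokenizer = BertTokenizer(vocab_path)
    ids_a = tokenizer.convert_tokens_to_ids(["hello"])
    ids_b = tokenizer.convert_tokens_to_ids(["world"])
    pair_ids = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)
    type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
    os.remove(vocab_path)
    # pair_ids -> [CLS] hello [SEP] world [SEP]; type_ids -> [0, 0, 0, 1, 1]
    return pair_ids, type_ids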
class BasicTokenizer(object):
"""Runs basic tokenization (punctuation splitting, lower casing, etc.)."""
def __init__(self, do_lower_case=True, never_split=None, tokenize_chinese_chars=True):
""" Constructs a BasicTokenizer.
Args:
**do_lower_case**: Whether to lower case the input.
**never_split**: (`optional`) list of str
Kept for backward compatibility purposes.
Now implemented directly at the base class level (see :func:`PreTrainedTokenizer.tokenize`)
List of tokens not to split.
**tokenize_chinese_chars**: (`optional`) boolean (default True)
Whether to tokenize Chinese characters.
This should likely be deactivated for Japanese:
see: https://github.com/huggingface/pytorch-pretrained-BERT/issues/328
"""
if never_split is None:
never_split = []
self.do_lower_case = do_lower_case
self.never_split = set(never_split)
self.tokenize_chinese_chars = tokenize_chinese_chars
def tokenize(self, text, never_split=None):
""" Basic Tokenization of a piece of text.
Split on "white spaces" only, for sub-word tokenization, see WordPieceTokenizer.
Args:
**never_split**: (`optional`) list of str
Kept for backward compatibility purposes.
Now implemented directly at the base class level (see :func:`PreTrainedTokenizer.tokenize`)
List of tokens not to split.
"""
# union() returns a new set containing the elements of both sets.
never_split = self.never_split.union(set(never_split)) if never_split else self.never_split
# This was added on November 1st, 2018 for the multilingual and Chinese
# models. This is also applied to the English models now, but it doesn't
# matter since the English models were not trained on any Chinese data
# and generally don't have any Chinese data in them (there are Chinese
# characters in the vocabulary because Wikipedia does have some Chinese
# words in the English Wikipedia.).
if self.tokenize_chinese_chars:
text = self._tokenize_chinese_chars(text)
orig_tokens = whitespace_tokenize(text)
split_tokens = []
for token in orig_tokens:
if self.do_lower_case and token not in never_split:
token = token.lower()
token = self._run_strip_accents(token)
split_tokens.extend(self._run_split_on_punc(token, never_split))
output_tokens = whitespace_tokenize(" ".join(split_tokens))
return output_tokens
def _run_strip_accents(self, text):
"""Strips accents from a piece of text."""
text = unicodedata.normalize("NFD", text)
output = []
for char in text:
cat = unicodedata.category(char)
if cat == "Mn":
continue
output.append(char)
return "".join(output)
def _run_split_on_punc(self, text, never_split=None):
"""Splits punctuation on a piece of text."""
if never_split is not None and text in never_split:
return [text]
chars = list(text)
i = 0
start_new_word = True
output = []
while i < len(chars):
char = chars[i]
if _is_punctuation(char):
output.append([char])
start_new_word = True
else:
if start_new_word:
output.append([])
start_new_word = False
output[-1].append(char)
i += 1
return ["".join(x) for x in output]
def _tokenize_chinese_chars(self, text):
"""Adds whitespace around any CJK character."""
output = []
for char in text:
cp = ord(char)
if self._is_chinese_char(cp):
output.append(" ")
output.append(char)
output.append(" ")
else:
output.append(char)
return "".join(output)
def _is_chinese_char(self, cp):
"""Checks whether CP is the codepoint of a CJK character."""
# This defines a "chinese character" as anything in the CJK Unicode block:
# https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_(Unicode_block)
#
# Note that the CJK Unicode block is NOT all Japanese and Korean characters,
# despite its name. The modern Korean Hangul alphabet is a different block,
# as is Japanese Hiragana and Katakana. Those alphabets are used to write
# space-separated words, so they are not treated specially and handled
like all of the other languages.
if (
(cp >= 0x4E00 and cp <= 0x9FFF)
or (cp >= 0x3400 and cp <= 0x4DBF) #
or (cp >= 0x20000 and cp <= 0x2A6DF) #
or (cp >= 0x2A700 and cp <= 0x2B73F) #
or (cp >= 0x2B740 and cp <= 0x2B81F) #
or (cp >= 0x2B820 and cp <= 0x2CEAF) #
or (cp >= 0xF900 and cp <= 0xFAFF)
or (cp >= 0x2F800 and cp <= 0x2FA1F) #
): #
return True
return False
def _clean_text(self, text):
"""Performs invalid character removal and whitespace cleanup on text."""
output = []
for char in text:
cp = ord(char)
if cp == 0 or cp == 0xFFFD or _is_control(char):
continue
if _is_whitespace(char):
output.append(" ")
else:
output.append(char)
return "".join(output)
class WordpieceTokenizer(object):
"""Runs WordPiece tokenization."""
def __init__(self, vocab, unk_token, max_input_chars_per_word=100):
self.vocab = vocab
self.unk_token = unk_token
self.max_input_chars_per_word = max_input_chars_per_word
def tokenize(self, text):
"""Tokenizes a piece of text into its word pieces.
This uses a greedy longest-match-first algorithm to perform tokenization
using the given vocabulary.
For example:
input = "unaffable"
output = ["un", "##aff", "##able"]
Args:
text: A single token or whitespace separated tokens. This should have
already been passed through `BasicTokenizer`.
Returns:
A list of wordpiece tokens.
"""
output_tokens = []
for token in whitespace_tokenize(text):
chars = list(token)
if len(chars) > self.max_input_chars_per_word:
output_tokens.append(self.unk_token)
continue
is_bad = False
start = 0
sub_tokens = []
while start < len(chars):
end = len(chars)
cur_substr = None
while start < end:
substr = "".join(chars[start:end])
if start > 0:
substr = "##" + substr
if substr in self.vocab:
cur_substr = substr
break
end -= 1
if cur_substr is None:
is_bad = True
break
sub_tokens.append(cur_substr)
start = end
if is_bad:
output_tokens.append(self.unk_token)
else:
output_tokens.extend(sub_tokens)
return output_tokens
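# A minimal sketch of WordpieceTokenizer's greedy longest-match-first behaviour
# (illustrative only; the tiny in-memory vocabulary is an assumption for the demo).
# Words that cannot be fully segmented fall back to the unknown token.
def _wordpiece_usage_sketch():
    toy_vocab = {"un": 0, "##aff": 1, "##able": 2}
    wordpiece = WordpieceTokenizer(vocab=toy_vocab, unk_token="[UNK]")
    assert wordpiece.tokenize("unaffable") == ["un", "##aff", "##able"]
    assert wordpiece.tokenize("unsegmentable") == ["[UNK]"]
    return wordpiece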
================================================
FILE: bert/tokenization_utils.py
================================================
# coding=utf-8
# Copyright 2020 The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" Tokenization classes for python tokenizers.
For fast tokenizers (provided by HuggingFace's tokenizers library) see tokenization_utils_fast.py
"""
import itertools
import logging
import re
import unicodedata
from typing import Dict, List, Optional, Tuple, Union
from .file_utils import add_end_docstrings
from .tokenization_utils_base import (
ENCODE_KWARGS_DOCSTRING,
ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING,
AddedToken,
BatchEncoding,
EncodedInput,
EncodedInputPair,
PaddingStrategy,
PreTokenizedInput,
PreTokenizedInputPair,
PreTrainedTokenizerBase,
TensorType,
TextInput,
TextInputPair,
TruncationStrategy,
)
logger = logging.getLogger(__name__)
def _is_whitespace(char):
"""Checks whether `chars` is a whitespace character."""
# \t, \n, and \r are technically contorl characters but we treat them
# as whitespace since they are generally considered as such.
if char == " " or char == "\t" or char == "\n" or char == "\r":
return True
cat = unicodedata.category(char)
if cat == "Zs":
return True
return False
def _is_control(char):
"""Checks whether `chars` is a control character."""
# These are technically control characters but we count them as whitespace
# characters.
if char == "\t" or char == "\n" or char == "\r":
return False
cat = unicodedata.category(char)
if cat.startswith("C"):
return True
return False
def _is_punctuation(char):
"""Checks whether `chars` is a punctuation character."""
cp = ord(char)
# We treat all non-letter/number ASCII as punctuation.
# Characters such as "^", "$", and "`" are not in the Unicode
# Punctuation class but we treat them as punctuation anyways, for
# consistency.
if (cp >= 33 and cp <= 47) or (cp >= 58 and cp <= 64) or (cp >= 91 and cp <= 96) or (cp >= 123 and cp <= 126):
return True
cat = unicodedata.category(char)
if cat.startswith("P"):
return True
return False
def _is_end_of_word(text):
"""Checks whether the last character in text is one of a punctuation, control or whitespace character."""
last_char = text[-1]
return bool(_is_control(last_char) | _is_punctuation(last_char) | _is_whitespace(last_char))
def _is_start_of_word(text):
"""Checks whether the first character in text is one of a punctuation, control or whitespace character."""
first_char = text[0]
return bool(_is_control(first_char) | _is_punctuation(first_char) | _is_whitespace(first_char))
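# A minimal sketch of the character-class helpers above (illustrative only). They let the
# slow tokenizers decide whether an added token found inside a string sits on a word
# boundary before choosing to split there.
def _char_helpers_usage_sketch():
    assert _is_whitespace(" ") and not _is_whitespace("a")
    assert _is_punctuation(",") and not _is_punctuation("a")
    assert _is_end_of_word("foo.") and _is_start_of_word(" bar")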
class PreTrainedTokenizer(PreTrainedTokenizerBase):
""" Base class for all slow tokenizers.
Handle all the shared methods for tokenization and special tokens as well as methods
downloading/caching/loading pretrained tokenizers as well as adding tokens to the vocabulary.
This class also contains the added tokens in a unified way on top of all tokenizers so we don't
have to handle the specific vocabulary augmentation methods of the various underlying
dictionary structures (BPE, sentencepiece...).
Class attributes (overridden by derived classes):
- ``vocab_files_names``: a python ``dict`` with, as keys, the ``__init__`` keyword name of each vocabulary file
required by the model, and as associated values, the filename for saving the associated file (string).
- ``pretrained_vocab_files_map``: a python ``dict of dict`` the high-level keys
being the ``__init__`` keyword name of each vocabulary file required by the model, the low-level being the
`short-cut-names` (string) of the pretrained models with, as associated values, the `url` (string) to the
associated pretrained vocabulary file.
- ``max_model_input_sizes``: a python ``dict`` with, as keys, the `short-cut-names` (string) of the pretrained
models, and as associated values, the maximum length of the sequence inputs of this model, or None if the
model has no maximum input size.
- ``pretrained_init_configuration``: a python ``dict`` with, as keys, the `short-cut-names` (string) of the
pretrained models, and as associated values, a dictionary of specific arguments to pass to the
``__init__`` method of the tokenizer class for this pretrained model when loading the tokenizer with the
``from_pretrained()`` method.
Args:
- ``model_max_length``: (`Optional`) int: the maximum length in number of tokens for the inputs to the transformer model.
When the tokenizer is loaded with `from_pretrained`, this will be set to the value stored for the associated
model in ``max_model_input_sizes`` (see above). If no value is provided, or no associated max_length can be found in ``max_model_input_sizes``, it will default to VERY_LARGE_INTEGER (``int(1e30)``).
- ``padding_side``: (`Optional`) string: the side on which the model should have padding applied.
Should be selected between ['right', 'left']
- ``model_input_names``: (`Optional`) List[string]: the list of the forward pass inputs accepted by the
model ("token_type_ids", "attention_mask"...).
- ``bos_token``: (`Optional`) string: a beginning of sentence token.
Will be associated to ``self.bos_token`` and ``self.bos_token_id``
- ``eos_token``: (`Optional`) string: an end of sentence token.
Will be associated to ``self.eos_token`` and ``self.eos_token_id``
- ``unk_token``: (`Optional`) string: an unknown token.
Will be associated to ``self.unk_token`` and ``self.unk_token_id``
- ``sep_token``: (`Optional`) string: a separation token (e.g. to separate context and query in an input sequence).
Will be associated to ``self.sep_token`` and ``self.sep_token_id``
- ``pad_token``: (`Optional`) string: a padding token.
Will be associated to ``self.pad_token`` and ``self.pad_token_id``
- ``cls_token``: (`Optional`) string: a classification token (e.g. to extract a summary of an input sequence
leveraging self-attention along the full depth of the model).
Will be associated to ``self.cls_token`` and ``self.cls_token_id``
- ``mask_token``: (`Optional`) string: a masking token (e.g. when training a model with masked-language
modeling). Will be associated to ``self.mask_token`` and ``self.mask_token_id``
- ``additional_special_tokens``: (`Optional`) list: a list of additional special tokens.
Adding all special tokens here ensures they won't be split by the tokenization process.
Will be associated to ``self.additional_special_tokens`` and ``self.additional_special_tokens_ids``
.. automethod:: __call__
"""
def __init__(self, **kwargs):
super().__init__(**kwargs)
# Added tokens - We store this for both slow and fast tokenizers
# until the serialization of Fast tokenizers is updated
self.added_tokens_encoder: Dict[str, int] = {}
self.added_tokens_decoder: Dict[int, str] = {}
self.unique_no_split_tokens: List[str] = []
@property
def is_fast(self) -> bool:
return False
@property
def vocab_size(self) -> int:
""" Size of the base vocabulary (without the added tokens) """
raise NotImplementedError
def get_vocab(self):
""" Returns the vocabulary as a dict of {token: index} pairs. `tokenizer.get_vocab()[token]` is equivalent to `tokenizer.convert_tokens_to_ids(token)` when `token` is in the vocab. """
raise NotImplementedError()
def get_added_vocab(self) -> Dict[str, int]:
return self.added_tokens_encoder
def __len__(self):
""" Size of the full vocabulary with the added tokens """
return self.vocab_size + len(self.added_tokens_encoder)
def _add_tokens(self, new_tokens: Union[List[str], List[AddedToken]], special_tokens=False) -> int:
"""
Add a list of new tokens to the tokenizer class. If the new tokens are not in the
vocabulary, they are added to it with indices starting from length of the current vocabulary.
Args:
new_tokens: string or list of string. Each string is a token to add. Tokens are only added if they are not
already in the vocabulary (tested by checking if the tokenizer assigns the index of the ``unk_token`` to them).
Returns:
Number of tokens added to the vocabulary.
Examples::
# Let's see how to increase the vocabulary of Bert model and tokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
num_added_toks = tokenizer.add_tokens(['new_tok1', 'my_new-tok2'])
print('We have added', num_added_toks, 'tokens')
model.resize_token_embeddings(len(tokenizer)) # Notice: resize_token_embeddings expects to receive the full size of the new vocabulary, i.e. the length of the tokenizer.
"""
new_tokens = [str(tok) for tok in new_tokens]
tokens_to_add = []
for token in new_tokens:
assert isinstance(token, str)
if not special_tokens and self.init_kwargs.get("do_lower_case", False):
token = token.lower()
if (
token != self.unk_token
and self.convert_tokens_to_ids(token) == self.convert_tokens_to_ids(self.unk_token)
and token not in tokens_to_add
):
tokens_to_add.append(token)
if self.verbose:
logger.info("Adding %s to the vocabulary", token)
added_tok_encoder = dict((tok, len(self) + i) for i, tok in enumerate(tokens_to_add))
added_tok_decoder = {v: k for k, v in added_tok_encoder.items()}
self.added_tokens_encoder.update(added_tok_encoder)
self.added_tokens_decoder.update(added_tok_decoder)
# Make sure we don't split on any special tokens (even if they were already in the vocab before, e.g. for Albert)
if special_tokens:
self.unique_no_split_tokens = list(set(self.unique_no_split_tokens).union(set(new_tokens)))
else:
# Or on the newly added tokens
self.unique_no_split_tokens = list(set(self.unique_no_split_tokens).union(set(tokens_to_add)))
return len(tokens_to_add)
def num_special_tokens_to_add(self, pair=False):
"""
Returns the number of added tokens when encoding a sequence with special tokens.
Note:
This encodes inputs and checks the number of added tokens, and is therefore not efficient. Do not put this
inside your training loop.
Args:
pair: Returns the number of added tokens in the case of a sequence pair if set to True, returns the
number of added tokens in the case of a single sequence if set to False.
Returns:
Number of tokens added to sequences
"""
token_ids_0 = []
token_ids_1 = []
return len(self.build_inputs_with_special_tokens(token_ids_0, token_ids_1 if pair else None))
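# A quick illustrative sketch of the counts this returns for a BERT-style tokenizer,
# which wraps single sequences in [CLS] ... [SEP] and pairs in [CLS] ... [SEP] ... [SEP]:
#     tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
#     tokenizer.num_special_tokens_to_add(pair=False)  # 2  ([CLS] and [SEP])
#     tokenizer.num_special_tokens_to_add(pair=True)   # 3  ([CLS] and two [SEP])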
def tokenize(self, text: TextInput, **kwargs):
""" Converts a string in a sequence of tokens (string), using the tokenizer.
Split in words for word-based vocabulary or sub-words for sub-word-based
vocabularies (BPE/SentencePieces/WordPieces).
Take care of added tokens.
Args:
text (:obj:`string`): The sequence to be encoded.
**kwargs (:obj: `dict`): Arguments passed to the model-specific `prepare_for_tokenization` preprocessing method.
"""
# Simple mapping string => AddedToken for special tokens with specific tokenization behaviors
all_special_tokens_extended = dict(
(str(t), t) for t in self.all_special_tokens_extended if isinstance(t, AddedToken)
)
text, kwargs = self.prepare_for_tokenization(text, **kwargs)
if kwargs:
logger.warning(f"Keyword arguments {kwargs} not recognized.")
# TODO: should this be in the base class?
if self.init_kwargs.get("do_lower_case", False):
# convert non-special tokens to lowercase
escaped_special_toks = [re.escape(s_tok) for s_tok in self.all_special_tokens]
pattern = r"(" + r"|".join(escaped_special_toks) + r")|" + r"(.+?)"
text = re.sub(pattern, lambda m: m.groups()[0] or m.groups()[1].lower(), text)
def split_on_token(tok, text):
result = []
tok_extended = all_special_tokens_extended.get(tok, None)
split_text = text.split(tok)
full_word = ""
for i, sub_text in enumerate(split_text):
# AddedToken can control whitespace stripping around them.
# We use them for GPT2 and Roberta to have different behavior depending on the special token
# Cf. https://github.com/huggingface/transformers/pull/2778
# and https://github.com/huggingface/transformers/issues/3788
if isinstance(tok_extended, AddedToken):
if tok_extended.single_word:
# Try to avoid splitting on token
if (
i < len(split_text) - 1
and not _is_end_of_word(sub_text)
and not _is_start_of_word(split_text[i + 1])
):
# Don't extract the special token
full_word += sub_text + tok
elif full_word:
full_word += sub_text
result += [full_word]
full_word = ""
continue
# Strip white spaces on the right
if tok_extended.rstrip and i > 0:
# A bit counter-intuitive but we strip the left of the string
# since tok_extended.rstrip means the special token is eating all white spaces on its right
sub_text = sub_text.lstrip()
# Strip white spaces on the left
if tok_extended.lstrip and i < len(split_text) - 1:
sub_text = sub_text.rstrip() # Opposite here
else:
# We strip left and right by default
if i < len(split_text) - 1:
sub_text = sub_text.rstrip()
if i > 0:
sub_text = sub_text.lstrip()
if i == 0 and not sub_text:
result += [tok]
elif i == len(split_text) - 1:
if sub_text:
result += [sub_text]
else:
pass
else:
if sub_text:
result += [sub_text]
result += [tok]
return result
def split_on_tokens(tok_list, text):
if not text.strip():
return []
if not tok_list:
return self._tokenize(text)
tokenized_text = []
text_list = [text]
for tok in tok_list:
tokenized_text = []
for sub_text in text_list:
if sub_text not in self.unique_no_split_tokens:
tokenized_text += split_on_token(tok, sub_text)
else:
tokenized_text += [sub_text]
text_list = tokenized_text
return list(
itertools.chain.from_iterable(
(
self._tokenize(token) if token not in self.unique_no_split_tokens else [token]
for token in tokenized_text
)
)
)
no_split_token = self.unique_no_split_tokens
tokenized_text = split_on_tokens(no_split_token, text)
return tokenized_text
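# Usage sketch: added tokens are protected from splitting by the logic above
# (vocabulary and outputs assume 'bert-base-uncased'):
#     tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
#     tokenizer.add_tokens(['new_tok1'])
#     tokenizer.tokenize('hello new_tok1 world')
#     # -> ['hello', 'new_tok1', 'world']   ('new_tok1' is never passed to _tokenize)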
def _tokenize(self, text, **kwargs):
""" Converts a string in a sequence of tokens (string), using the tokenizer.
Split in words for word-based vocabulary or sub-words for sub-word-based
vocabularies (BPE/SentencePieces/WordPieces).
Do NOT take care of added tokens.
"""
raise NotImplementedError
def convert_tokens_to_ids(self, tokens):
""" Converts a token string (or a sequence of tokens) in a single integer id
(or a sequence of ids), using the vocabulary.
"""
if tokens is None:
return None
if isinstance(tokens, str):
return self._convert_token_to_id_with_added_voc(tokens)
ids = []
for token in tokens:
ids.append(self._convert_token_to_id_with_added_voc(token))
return ids
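# Minimal sketch of the token <-> id round trip (ids shown are those of 'bert-base-uncased'):
#     tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
#     tokenizer.convert_tokens_to_ids('[CLS]')              # 101
#     tokenizer.convert_tokens_to_ids(['[CLS]', '[SEP]'])   # [101, 102]
#     tokenizer.convert_ids_to_tokens([101, 102])           # ['[CLS]', '[SEP]']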
def _convert_token_to_id_with_added_voc(self, token):
if token is None:
return None
if token in self.added_tokens_encoder:
return self.added_tokens_encoder[token]
return self._convert_token_to_id(token)
def _convert_token_to_id(self, token):
raise NotImplementedError
def _encode_plus(
self,
text: Union[TextInput, PreTokenizedInput, EncodedInput],
text_pair: Optional[Union[TextInput, PreTokenizedInput, EncodedInput]] = None,
add_special_tokens: bool = True,
padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD,
truncation_strategy: TruncationStrategy = TruncationStrategy.DO_NOT_TRUNCATE,
max_length: Optional[int] = None,
stride: int = 0,
is_pretokenized: bool = False,
pad_to_multiple_of: Optional[int] = None,
return_tensors: Optional[Union[str, TensorType]] = None,
return_token_type_ids: Optional[bool] = None,
return_attention_mask: Optional[bool] = None,
return_overflowing_tokens: bool = False,
return_special_tokens_mask: bool = False,
return_offsets_mapping: bool = False,
return_length: bool = False,
verbose: bool = True,
**kwargs
) -> BatchEncoding:
def get_input_ids(text):
if isinstance(text, str):
tokens = self.tokenize(text, **kwargs)
return self.convert_tokens_to_ids(tokens)
elif isinstance(text, (list, tuple)) and len(text) > 0 and isinstance(text[0], str):
if is_pretokenized:
tokens = list(itertools.chain(*(self.tokenize(t, is_pretokenized=True, **kwargs) for t in text)))
return self.convert_tokens_to_ids(tokens)
else:
return self.convert_tokens_to_ids(text)
elif isinstance(text, (list, tuple)) and len(text) > 0 and isinstance(text[0], int):
return text
else:
if is_pretokenized:
raise ValueError(
f"Input {text} is not valid. Should be a string or a list/tuple of strings when `is_pretokenized=True`."
)
else:
raise ValueError(
f"Input {text} is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers."
)
if return_offsets_mapping:
raise NotImplementedError(
"return_offset_mapping is not available when using Python tokenizers."
"To use this feature, change your tokenizer to one deriving from "
"transformers.PreTrainedTokenizerFast."
"More information on available tokenizers at "
"https://github.com/huggingface/transformers/pull/2674"
)
first_ids = get_input_ids(text)
second_ids = get_input_ids(text_pair) if text_pair is not None else None
return self.prepare_for_model(
first_ids,
pair_ids=second_ids,
add_special_tokens=add_special_tokens,
padding=padding_strategy.value,
truncation=truncation_strategy.value,
max_length=max_length,
stride=stride,
pad_to_multiple_of=pad_to_multiple_of,
return_tensors=return_tensors,
prepend_batch_axis=True,
return_attention_mask=return_attention_mask,
return_token_type_ids=return_token_type_ids,
return_overflowing_tokens=return_overflowing_tokens,
return_special_tokens_mask=return_special_tokens_mask,
return_length=return_length,
verbose=verbose,
)
def _batch_encode_plus(
self,
batch_text_or_text_pairs: Union[
List[TextInput],
List[TextInputPair],
List[PreTokenizedInput],
List[PreTokenizedInputPair],
List[EncodedInput],
List[EncodedInputPair],
],
add_special_tokens: bool = True,
padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD,
truncation_strategy: TruncationStrategy = TruncationStrategy.DO_NOT_TRUNCATE,
max_length: Optional[int] = None,
stride: int = 0,
is_pretokenized: bool = False,
pad_to_multiple_of: Optional[int] = None,
return_tensors: Optional[Union[str, TensorType]] = None,
return_token_type_ids: Optional[bool] = None,
return_attention_mask: Optional[bool] = None,
return_overflowing_tokens: bool = False,
return_special_tokens_mask: bool = False,
return_offsets_mapping: bool = False,
return_length: bool = False,
verbose: bool = True,
**kwargs
) -> BatchEncoding:
def get_input_ids(text):
if isinstance(text, str):
tokens = self.tokenize(text, **kwargs)
return self.convert_tokens_to_ids(tokens)
elif isinstance(text, (list, tuple)) and len(text) > 0 and isinstance(text[0], str):
if is_pretokenized:
tokens = list(itertools.chain(*(self.tokenize(t, is_pretokenized=True, **kwargs) for t in text)))
return self.convert_tokens_to_ids(tokens)
else:
return self.convert_tokens_to_ids(text)
elif isinstance(text, (list, tuple)) and len(text) > 0 and isinstance(text[0], int):
return text
else:
raise ValueError(
"Input is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers."
)
if return_offsets_mapping:
raise NotImplementedError(
"return_offset_mapping is not available when using Python tokenizers."
"To use this feature, change your tokenizer to one deriving from "
"transformers.PreTrainedTokenizerFast."
)
input_ids = []
for ids_or_pair_ids in batch_text_or_text_pairs:
if not isinstance(ids_or_pair_ids, (list, tuple)):
ids, pair_ids = ids_or_pair_ids, None
elif is_pretokenized and not isinstance(ids_or_pair_ids[0], (list, tuple)):
ids, pair_ids = ids_or_pair_ids, None
else:
ids, pair_ids = ids_or_pair_ids
first_ids = get_input_ids(ids)
second_ids = get_input_ids(pair_ids) if pair_ids is not None else None
input_ids.append((first_ids, second_ids))
batch_outputs = self._batch_prepare_for_model(
input_ids,
add_special_tokens=add_special_tokens,
padding_strategy=padding_strategy,
truncation_strategy=truncation_strategy,
max_length=max_length,
stride=stride,
pad_to_multiple_of=pad_to_multiple_of,
return_attention_mask=return_attention_mask,
return_token_type_ids=return_token_type_ids,
return_overflowing_tokens=return_overflowing_tokens,
return_special_tokens_mask=return_special_tokens_mask,
return_length=return_length,
return_tensors=return_tensors,
verbose=verbose,
)
return BatchEncoding(batch_outputs)
@add_end_docstrings(ENCODE_KWARGS_DOCSTRING, ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING)
def _batch_prepare_for_model(
self,
batch_ids_pairs: List[Union[PreTokenizedInputPair, Tuple[List[int], None]]],
add_special_tokens: bool = True,
padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD,
truncation_strategy: TruncationStrategy = TruncationStrategy.DO_NOT_TRUNCATE,
max_length: Optional[int] = None,
stride: int = 0,
pad_to_multiple_of: Optional[int] = None,
return_tensors: Optional[str] = None,
return_token_type_ids: Optional[bool] = None,
return_attention_mask: Optional[bool] = None,
return_overflowing_tokens: bool = False,
return_special_tokens_mask: bool = False,
return_length: bool = False,
verbose: bool = True,
) -> BatchEncoding:
""" Prepares a sequence of input id, or a pair of sequences of inputs ids so that it can be used by the model.
It adds special tokens, truncates sequences if overflowing while taking into account the special tokens and
manages a moving window (with user defined stride) for overflowing tokens
Args:
batch_ids_pairs: list of tokenized input ids or input ids pairs
"""
batch_outputs = {}
for first_ids, second_ids in batch_ids_pairs:
outputs = self.prepare_for_model(
first_ids,
second_ids,
add_special_tokens=add_special_tokens,
padding=PaddingStrategy.DO_NOT_PAD.value, # we pad in batch afterward
truncation=truncation_strategy.value,
max_length=max_length,
stride=stride,
pad_to_multiple_of=None, # we pad in batch afterward
return_attention_mask=False, # we pad in batch afterward
return_token_type_ids=return_token_type_ids,
return_overflowing_tokens=return_overflowing_tokens,
return_special_tokens_mask=return_special_tokens_mask,
return_length=return_length,
return_tensors=None, # We convert the whole batch to tensors at the end
prepend_batch_axis=False,
verbose=verbose,
)
for key, value in outputs.items():
if key not in batch_outputs:
batch_outputs[key] = []
batch_outputs[key].append(value)
batch_outputs = self.pad(
batch_outputs,
padding=padding_strategy.value,
max_length=max_length,
pad_to_multiple_of=pad_to_multiple_of,
return_attention_mask=return_attention_mask,
)
batch_outputs = BatchEncoding(batch_outputs, tensor_type=return_tensors)
return batch_outputs
def prepare_for_tokenization(self, text: str, is_pretokenized=False, **kwargs) -> (str, dict):
""" Performs any necessary transformations before tokenization.
This method should pop the arguments from kwargs and return kwargs as well.
We test kwargs at the end of the encoding process to be sure all the arguments have been used.
"""
return (text, kwargs)
def get_special_tokens_mask(
self, token_ids_0: List, token_ids_1: Optional[List] = None, already_has_special_tokens: bool = False
) -> List[int]:
"""
Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer ``prepare_for_model`` method.
Args:
token_ids_0: list of ids (must not contain special tokens)
token_ids_1: Optional list of ids (must not contain special tokens), necessary when fetching sequence ids
for sequence pairs
already_has_special_tokens: (default False) Set to True if the token list is already formatted with
special tokens for the model
Returns:
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
"""
return [0] * ((len(token_ids_1) if token_ids_1 else 0) + len(token_ids_0))
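# Sketch of the mask convention (1 marks a special token, 0 a sequence token). The base
# implementation above returns all zeros; subclasses such as BertTokenizer override it:
#     tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
#     ids = tokenizer.encode('hello world')   # [CLS] hello world [SEP]
#     tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)
#     # -> [1, 0, 0, 1]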
def convert_ids_to_tokens(
self, ids: Union[int, List[int]], skip_special_tokens: bool = False
) -> Union[str, List[str]]:
""" Converts a single index or a sequence of indices (integers) in a token "
(resp.) a sequence of tokens (str), using the vocabulary and added tokens.
Args:
skip_special_tokens: Don't decode special tokens (self.all_special_tokens). Default: False
"""
if isinstance(ids, int):
if ids in self.added_tokens_decoder:
return self.added_tokens_decoder[ids]
else:
return self._convert_id_to_token(ids)
tokens = []
for index in ids:
index = int(index)
if skip_special_tokens and index in self.all_special_ids:
continue
if index in self.added_tokens_decoder:
tokens.append(self.added_tokens_decoder[index])
else:
tokens.append(self._convert_id_to_token(index))
return tokens
def _convert_id_to_token(self, index: int) -> str:
raise NotImplementedError
def convert_tokens_to_string(self, tokens: List[str]) -> str:
""" Converts a sequence of tokens (string) in a single string.
The most simple way to do it is ' '.join(self.convert_ids_to_tokens(token_ids))
but we often want to remove sub-word tokenization artifacts at the same time.
"""
return " ".join(self.convert_ids_to_tokens(tokens))
def decode(
self, token_ids: List[int], skip_special_tokens: bool = False, clean_up_tokenization_spaces: bool = True
) -> str:
filtered_tokens = self.convert_ids_to_tokens(token_ids, skip_special_tokens=skip_special_tokens)
# To avoid mixing byte-level and unicode for byte-level BPE
# we need to build the string separately for added tokens and byte-level tokens
# cf. https://github.com/huggingface/transformers/issues/1133
sub_texts = []
current_sub_text = []
for token in filtered_tokens:
if skip_special_tokens and token in self.all_special_tokens:  # tokens here are strings, so compare against the string forms
continue
if token in self.added_tokens_encoder:
if current_sub_text:
sub_texts.append(self.convert_tokens_to_string(current_sub_text))
current_sub_text = []
sub_texts.append(token)
else:
current_sub_text.append(token)
if current_sub_text:
sub_texts.append(self.convert_tokens_to_string(current_sub_text))
text = " ".join(sub_texts)
if clean_up_tokenization_spaces:
clean_text = self.clean_up_tokenization(text)
return clean_text
else:
return text
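# Encode/decode round-trip sketch (assuming 'bert-base-uncased'):
#     tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
#     ids = tokenizer.encode('Hello world')
#     tokenizer.decode(ids)                             # '[CLS] hello world [SEP]'
#     tokenizer.decode(ids, skip_special_tokens=True)   # 'hello world'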
def save_vocabulary(self, save_directory) -> Tuple[str]:
""" Save the tokenizer vocabulary to a directory. This method does *NOT* save added tokens
and special token mappings.
Please use :func:`~transformers.PreTrainedTokenizer.save_pretrained` to save the full
tokenizer state if you want to reload it using the :func:`~transformers.PreTrainedTokenizer.from_pretrained`
class method.
"""
raise NotImplementedError
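# Concrete tokenizers implement save_vocabulary; in practice the full state (vocabulary,
# added tokens and special-token mappings) is saved and reloaded through save_pretrained /
# from_pretrained, roughly as in this sketch (the directory path is illustrative):
#     tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
#     tokenizer.save_pretrained('./my_tokenizer/')
#     reloaded = BertTokenizer.from_pretrained('./my_tokenizer/')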
================================================
FILE: bert/tokenization_utils_base.py
================================================
# coding=utf-8
# Copyright 2020 The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" Base classes common to both the slow and the fast tokenization classes:
PreTrainedTokenizerBase (host all the user fronting encoding methodes)
Special token mixing (host the special tokens logic) and
BatchEncoding (wrap the dictionnary of output with special method for the Fast tokenizers)
"""
import copy
import json
import logging
import os
import warnings
from collections import UserDict
from enum import Enum
from typing import Any, Dict, List, NamedTuple, Optional, Sequence, Tuple, Union
import numpy as np
from tokenizers import AddedToken
from tokenizers import Encoding as EncodingFast
from .file_utils import (
add_end_docstrings,
cached_path,
hf_bucket_url,
is_remote_url,
is_tf_available,
is_torch_available,
torch_required,
)
if is_tf_available():
import tensorflow as tf
if is_torch_available():
import torch
logger = logging.getLogger(__name__)
VERY_LARGE_INTEGER = int(1e30) # This is used to set the max input length for a model with infinite size input
LARGE_INTEGER = int(1e20) # This is used when we need something big but slightly smaller than VERY_LARGE_INTEGER
# Define type aliases and NamedTuples
TextInput = str
PreTokenizedInput = List[str]
EncodedInput = List[int]
TextInputPair = Tuple[str, str]
PreTokenizedInputPair = Tuple[List[str], List[str]]
EncodedInputPair = Tuple[List[int], List[int]]
# Slow tokenizers used to be saved in three separated files
SPECIAL_TOKENS_MAP_FILE = "special_tokens_map.json"
ADDED_TOKENS_FILE = "added_tokens.json"
TOKENIZER_CONFIG_FILE = "tokenizer_config.json"
# Fast tokenizers (provided by HuggingFace tokenizer's library) can be saved in a single file
FULL_TOKENIZER_FILE = "tokenizer.json"
class ExplicitEnum(Enum):
""" Enum with more explicit error message for missing values.
"""
@classmethod
def _missing_(cls, value):
raise ValueError(
"%r is not a valid %s, please select one of %s"
% (value, cls.__name__, str(list(cls._value2member_map_.keys())))
)
class TruncationStrategy(ExplicitEnum):
ONLY_FIRST = "only_first"
ONLY_SECOND = "only_second"
LONGEST_FIRST = "longest_first"
DO_NOT_TRUNCATE = "do_not_truncate"
class PaddingStrategy(ExplicitEnum):
LONGEST = "longest"
MAX_LENGTH = "max_length"
DO_NOT_PAD = "do_not_pad"
class TensorType(ExplicitEnum):
PYTORCH = "pt"
TENSORFLOW = "tf"
NUMPY = "np"
class CharSpan(NamedTuple):
""" Character span in the original string
Args:
start: index of the first character in the original string
end: index of the character following the last character in the original string
"""
start: int
end: int
class TokenSpan(NamedTuple):
""" Token span in an encoded string (list of tokens)
Args:
start: index of the first token in the span
end: index of the token following the last token in the span
"""
start: int
end: int
class BatchEncoding(UserDict):
""" BatchEncoding hold the output of the encode and batch_encode methods (tokens, attention_masks, etc).
This class is derived from a python Dictionary and can be used as a dictionnary.
In addition, this class expose utility methods to map from word/char space to token space.
Args:
data (:obj:`dict`): Dictionary of lists/arrays returned by the encode/batch_encode methods ('input_ids', 'attention_mask'...)
encoding (:obj:`EncodingFast`, :obj:`list(EncodingFast)`, `optional`, defaults to :obj:`None`):
If the tokenizer is a fast tokenizer which outputs additional informations like mapping from word/char space to token space
the `EncodingFast` instance or list of instance (for batches) hold these informations.
tensor_type (:obj:`Union[None, str, TensorType]`, `optional`, defaults to :obj:`None`):
You can give a tensor_type here to convert the lists of integers in PyTorch/TF/Numpy Tensors at initialization
prepend_batch_axis (:obj:`bool`, `optional`, defaults to :obj:`False`):
Set to True to add a batch axis when converting in Tensors (see :obj:`tensor_type` above)
"""
def __init__(
self,
data: Optional[Dict[str, Any]] = None,
encoding: Optional[Union[EncodingFast, Sequence[EncodingFast]]] = None,
tensor_type: Union[None, str, TensorType] = None,
prepend_batch_axis: bool = False,
):
super().__init__(data)
if isinstance(encoding, EncodingFast):
encoding = [encoding]
self._encodings = encoding
self.convert_to_tensors(tensor_type=tensor_type, prepend_batch_axis=prepend_batch_axis)
@property
def is_fast(self):
"""
Indicates whether this BatchEncoding was generated from the result of a PreTrainedTokenizerFast
Returns: True if generated from subclasses of PreTrainedTokenizerFast, False otherwise
"""
return self._encodings is not None
def __getitem__(self, item: Union[int, str]) -> EncodingFast:
""" If the key is a string, get the value of the dict associated to `key` ('input_ids', 'attention_mask'...)
If the key is an integer, get the EncodingFast for batch item with index `key`
"""
if isinstance(item, str):
return self.data[item]
elif self._encodings is not None:
return self._encodings[item]
else:
raise KeyError(
"Indexing with integers (to access backend Encoding for a given batch index) "
"is not available when using Python based tokenizers"
)
def __getattr__(self, item: str):
try:
return self.data[item]
except KeyError:
raise AttributeError
def __getstate__(self):
return {"data": self.data, "encodings": self._encodings}
def __setstate__(self, state):
if "data" in state:
self.data = state["data"]
if "encodings" in state:
self._encodings = state["encodings"]
def keys(self):
return self.data.keys()
def values(self):
return self.data.values()
def items(self):
return self.data.items()
# After this point:
# Extended properties and methods only available for fast (Rust-based) tokenizers
# provided by HuggingFace tokenizers library.
@property
def encodings(self) -> Optional[List[EncodingFast]]:
"""
Return the list of all encodings from the tokenization process
Returns: List[EncodingFast] or None if the input was tokenized through a Python (i.e. not fast) tokenizer
"""
return self._encodings
def tokens(self, batch_index: int = 0) -> List[str]:
if not self._encodings:
raise ValueError("tokens() is not available when using Python based tokenizers")
return self._encodings[batch_index].tokens
def words(self, batch_index: int = 0) -> List[Optional[int]]:
if not self._encodings:
raise ValueError("words() is not available when using Python based tokenizers")
return self._encodings[batch_index].words
def token_to_word(self, batch_or_token_index: int, token_index: Optional[int] = None) -> int:
"""
Get the index of the word corresponding to (i.e. comprising) an encoded token
in a sequence of the batch.
Can be called as:
- ``self.token_to_word(token_index)`` if batch size is 1
- ``self.token_to_word(batch_index, token_index)`` if batch size is greater than 1
This method is particularly suited when the input sequences are provided as
pre-tokenized sequences (i.e. words are defined by the user). In this case it allows
to easily associate encoded tokens with provided tokenized words.
Args:
batch_or_token_index (:obj:`int`):
Index of the sequence in the batch. If the batch only comprises one sequence,
this can be the index of the token in the sequence
token_index (:obj:`int`, `optional`):
If a batch index is provided in `batch_or_token_index`, this can be the index
of the token in the sequence.
Returns:
:obj:`int`:
index of the word in the input sequence.
"""
if not self._encodings:
raise ValueError("token_to_word() is not available when using Python based tokenizers")
if token_index is not None:
batch_index = batch_or_token_index
else:
batch_index = 0
token_index = batch_or_token_index
if batch_index < 0:
batch_index = self._batch_size + batch_index
if token_index < 0:
token_index = self._seq_len + token_index
return self._encodings[batch_index].token_to_word(token_index)
def word_to_tokens(self, batch_or_word_index: int, word_index: Optional[int] = None) -> TokenSpan:
"""
Get the encoded token span corresponding to a word in the sequence of the batch.
Token spans are returned as a TokenSpan NamedTuple with:
- start: index of the first token
- end: index of the token following the last token
Can be called as:
- ``self.word_to_tokens(word_index)`` if batch size is 1
- ``self.word_to_tokens(batch_index, word_index)`` if batch size is greater or equal to 1
This method is particularly suited when the input sequences are provided as
pre-tokenized sequences (i.e. words are defined by the user). In this case it allows
to easily associate encoded tokens with provided tokenized words.
Args:
batch_or_word_index (:obj:`int`):
Index of the sequence in the batch. If the batch only comprises one sequence,
this can be the index of the word in the sequence
word_index (:obj:`int`, `optional`):
If a batch index is provided in `batch_or_word_index`, this can be the index
of the word in the sequence.
Returns:
:obj:`TokenSpan`:
Span of tokens in the encoded sequence.
:obj:`TokenSpan` are NamedTuple with:
- start: index of the first token
- end: index of the token following the last token
"""
if not self._encodings:
raise ValueError("word_to_tokens() is not available when using Python based tokenizers")
if word_index is not None:
batch_index = batch_or_word_index
else:
batch_index = 0
word_index = batch_or_word_index
if batch_index < 0:
batch_index = self._batch_size + batch_index
if word_index < 0:
word_index = self._seq_len + word_index
return TokenSpan(*(self._encodings[batch_index].word_to_tokens(word_index)))
def token_to_chars(self, batch_or_token_index: int, token_index: Optional[int] = None) -> CharSpan:
"""
Get the character span corresponding to an encoded token in a sequence of the batch.
Character spans are returned as a CharSpan NamedTuple with:
- start: index of the first character in the original string associated to the token
- end: index of the character following the last character in the original string associated to the token
Can be called as:
- ``self.token_to_chars(token_index)`` if batch size is 1
- ``self.token_to_chars(batch_index, token_index)`` if batch size is greater or equal to 1
Args:
batch_or_token_index (:obj:`int`):
Index of the sequence in the batch. If the batch only comprises one sequence,
this can be the index of the token in the sequence
token_index (:obj:`int`, `optional`):
If a batch index is provided in `batch_or_token_index`, this can be the index
of the token or tokens in the sequence.
Returns:
:obj:`CharSpan`:
Span of characters in the original string.
:obj:`CharSpan` are NamedTuple with:
- start: index of the first character in the original string
- end: index of the character following the last character in the original string
"""
if not self._encodings:
raise ValueError("token_to_chars() is not available when using Python based tokenizers")
if token_index is not None:
batch_index = batch_or_token_index
else:
batch_index = 0
token_index = batch_or_token_index
return CharSpan(*(self._encodings[batch_index].token_to_chars(token_index)))
def char_to_token(self, batch_or_char_index: int, char_index: Optional[int] = None) -> int:
"""
Get the index of the token in the encoded output comprising a character
in the original string for a sequence of the batch.
Can be called as:
- ``self.char_to_token(char_index)`` if batch size is 1
- ``self.char_to_token(batch_index, char_index)`` if batch size is greater or equal to 1
This method is particularly suited when the input sequences are provided as
pre-tokenized sequences (i.e. words are defined by the user). In this case it allows
to easily associate encoded tokens with provided tokenized words.
Args:
batch_or_char_index (:obj:`int`):
Index of the sequence in the batch. If the batch only comprises one sequence,
this can be the index of the character in the original string
char_index (:obj:`int`, `optional`):
If a batch index is provided in `batch_or_char_index`, this can be the index
of the character in the original string.
Returns:
:obj:`int`: Index of the token.
"""
if not self._encodings:
raise ValueError("char_to_token() is not available when using Python based tokenizers")
if char_index is not None:
batch_index = batch_or_char_index
else:
batch_index = 0
char_index = batch_or_char_index
return self._encodings[batch_index].char_to_token(char_index)
def word_to_chars(self, batch_or_word_index: int, word_index: Optional[int] = None) -> CharSpan:
"""
Get the character span in the original string corresponding to given word in a sequence
of the batch.
Character spans are returned as a CharSpan NamedTuple with:
- start: index of the first character in the original string
- end: index of the character following the last character in the original string
Can be called as:
- ``self.word_to_chars(word_index)`` if batch size is 1
- ``self.word_to_chars(batch_index, word_index)`` if batch size is greater or equal to 1
Args:
batch_or_word_index (:obj:`int`):
Index of the sequence in the batch. If the batch only comprises one sequence,
this can be the index of the word in the sequence
word_index (:obj:`int`, `optional`):
If a batch index is provided in `batch_or_word_index`, this can be the index
of the word in the sequence.
Returns:
:obj:`CharSpan` or :obj:`List[CharSpan]`:
Span(s) of the associated character or characters in the string.
CharSpan are NamedTuple with:
- start: index of the first character associated with the word in the original string
- end: index of the character following the last character associated with the word in the original string
"""
if not self._encodings:
raise ValueError("word_to_chars() is not available when using Python based tokenizers")
if word_index is not None:
batch_index = batch_or_word_index
else:
batch_index = 0
word_index = batch_or_word_index
return CharSpan(*(self._encodings[batch_index].word_to_chars(word_index)))
def char_to_word(self, batch_or_char_index: int, char_index: Optional[int] = None) -> int:
"""
Get the word in the original string corresponding to a character in the original string of
a sequence of the batch.
Can be called as:
- ``self.char_to_word(char_index)`` if batch size is 1
- ``self.char_to_word(batch_index, char_index)`` if batch size is greater than 1
This method is particularly suited when the input sequences are provided as
pre-tokenized sequences (i.e. words are defined by the user). In this case it allows
to easily associate encoded tokens with provided tokenized words.
Args:
batch_or_char_index (:obj:`int`):
Index of the sequence in the batch. If the batch only comprises one sequence,
this can be the index of the character in the original string.
char_index (:obj:`int`, `optional`):
If a batch index is provided in `batch_or_char_index`, this can be the index
of the character in the original string.
Returns:
:obj:`int` or :obj:`List[int]`:
Index or indices of the corresponding word(s).
"""
if not self._encodings:
raise ValueError("char_to_word() is not available when using Python based tokenizers")
if char_index is not None:
batch_index = batch_or_char_index
else:
batch_index = 0
char_index = batch_or_char_index
return self._encodings[batch_index].char_to_word(char_index)
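# Alignment sketch: the mapping helpers above need the backend EncodingFast objects, i.e. a
# fast (Rust-backed) tokenizer such as transformers' BertTokenizerFast; outputs assume
# 'bert-base-uncased':
#     tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
#     enc = tokenizer('hello world')
#     enc.tokens()             # ['[CLS]', 'hello', 'world', '[SEP]']
#     enc.token_to_word(1)     # 0 -> token 'hello' belongs to word 0
#     enc.word_to_tokens(1)    # TokenSpan(start=2, end=3) -> word 'world'
#     enc.char_to_token(6)     # 2 -> character 'w' falls inside token index 2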
def convert_to_tensors(self, tensor_type: Union[None, str, TensorType], prepend_batch_axis: bool = False):
if tensor_type is None:
return self
# Convert to TensorType
if not isinstance(tensor_type, TensorType):
tensor_type = TensorType(tensor_type)
# Get a function reference for the correct framework
if tensor_type == TensorType.TENSORFLOW and is_tf_available():
as_tensor = tf.constant
elif tensor_type == TensorType.PYTORCH and is_torch_available():
as_tensor = torch.tensor
elif tensor_type == TensorType.NUMPY:
as_tensor = np.asarray
else:
raise ImportError(
"Unable to convert output to tensors format {}, PyTorch or TensorFlow is not available.".format(
tensor_type
)
)
# Do the tensor conversion in batch
for key, value in self.items():
try:
if prepend_batch_axis:
value = [value]
tensor = as_tensor(value)
# ensure the tensor is at least 2d
if tensor.ndim > 2:
tensor = tensor.squeeze(0)
elif tensor.ndim < 2:
tensor = tensor[None, :]
self[key] = tensor
except: # noqa E722
raise ValueError(
"Unable to create tensor, you should probably activate truncation and/or padding "
"with 'padding=True' 'truncation=True' to have batched tensors with the same length."
)
return self
@torch_required
def to(self, device: str):
"""Send all values to device by calling v.to(device)"""
self.data = {k: v.to(device) for k, v in self.data.items()}
return self
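# Tensor-conversion sketch: with return_tensors='pt' the integer lists are turned into
# torch tensors at construction (via convert_to_tensors above), and .to(device) moves
# every field in one call (the device string is illustrative):
#     tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
#     enc = tokenizer('hello world', return_tensors='pt')
#     enc['input_ids'].shape   # torch.Size([1, 4])
#     enc = enc.to('cuda:0')   # input_ids, token_type_ids, attention_mask now on GPU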
# class AddedToken(UserString):
# """ AddedToken represents a token to be added to a Tokenizer
# An AddedToken can have special options defining the way it should behave.
# Args:
# content: str:
# The content of the token
# single_word: bool
# Whether this token should only match against single word. If True,
# this token will never match inside of a word.
# lstrip: bool
# Whether this token should strip all potential whitespaces on the left side.
# If True, this token will greedily match any whitespace on the left and then strip
# them out.
# rstrip: bool
# Whether this token should strip all potential whitespaces on the right side.
# If True, this token will greedily match any whitespace on the right and then strip
# them out.
# """
# def __init__(
# self, data: str, single_word: bool = False, lstrip: bool = False, rstrip: bool = False,
# ):
# super().__init__(data)
# self._single_word = single_word
# self._lstrip = lstrip
# self._rstrip = rstrip
# def lower(self):
# return AddedToken(self.data.lower(), self._single_word, self._lstrip, self._rstrip)
class SpecialTokensMixin:
""" SpecialTokensMixin is derived by ``PreTrainedTokenizer`` and ``PreTrainedTokenizerFast`` and
handles specific behaviors related to special tokens. In particular, this class hold the
attributes which can be used to directly access to these special tokens in a
model-independant manner and allow to set and update the special tokens.
"""
SPECIAL_TOKENS_ATTRIBUTES = [
"bos_token",
"eos_token",
"unk_token",
"sep_token",
"pad_token",
"cls_token",
"mask_token",
"additional_special_tokens",
]
def __init__(self, verbose=True, **kwargs):
self._bos_token = None
self._eos_token = None
self._unk_token = None
self._sep_token = None
self._pad_token = None
self._cls_token = None
self._mask_token = None
self._pad_token_type_id = 0
self._additional_special_tokens = []
self.verbose = verbose
# We directly set the hidden value to allow initialization with special tokens
# which are not yet in the vocabulary. Necessary for serialization/de-serialization
# TODO clean this up at some point (probably by switching to fast tokenizers)
for key, value in kwargs.items():
if key in self.SPECIAL_TOKENS_ATTRIBUTES:
if key == "additional_special_tokens":
assert isinstance(value, (list, tuple)) and all(isinstance(t, str) for t in value)
setattr(self, key, value)
elif isinstance(value, (str, AddedToken)):
setattr(self, key, value)
else:
raise TypeError(
"special token {} has to be either str or AddedToken but got: {}".format(key, type(value))
)
def sanitize_special_tokens(self) -> int:
""" Make sure that all the special tokens attributes of the tokenizer (tokenizer.mask_token, tokenizer.cls_token, ...)
are in the vocabulary. Add the missing ones to the vocabulary if needed.
Return:
Number of tokens added to the vocabulary during the operation.
"""
return self.add_tokens(self.all_special_tokens_extended, special_tokens=True)
def add_special_tokens(self, special_tokens_dict: Dict[str, Union[str, AddedToken]]) -> int:
"""
Add a dictionary of special tokens (eos, pad, cls...) to the encoder and link them
to class attributes. If special tokens are NOT in the vocabulary, they are added
to it (indexed starting from the last index of the current vocabulary).
Using `add_special_tokens` will ensure your special tokens can be used in several ways:
- special tokens are carefully handled by the tokenizer (they are never split)
- you can easily refer to special tokens using tokenizer class attributes like `tokenizer.cls_token`. This makes it easy to develop model-agnostic training and fine-tuning scripts.
When possible, special tokens are already registered for provided pretrained models (ex: BertTokenizer cls_token is already registered to be '[CLS]' and XLM's one is also registered to be '</s>')
Args:
special_tokens_dict: dict of string. Keys should be in the list of predefined special attributes:
[``bos_token``, ``eos_token``, ``unk_token``, ``sep_token``, ``pad_token``, ``cls_token``, ``mask_token``,
``additional_special_tokens``].
Tokens are only added if they are not already in the vocabulary (tested by checking whether the tokenizer assigns the index of the ``unk_token`` to them).
Returns:
Number of tokens added to the vocabulary.
Examples::
# Let's see how to add a new classification token to GPT-2
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
special_tokens_dict = {'cls_token': '<CLS>'}
num_added_toks = tokenizer.add_special_tokens(special_tokens_dict)
print('We have added', num_added_toks, 'tokens')
model.resize_token_embeddings(len(tokenizer)) # Notice: resize_token_embeddings expects to receive the full size of the new vocabulary, i.e. the length of the tokenizer.
assert tokenizer.cls_token == '<CLS>'
"""
if not special_tokens_dict:
return 0
added_tokens = 0
for key, value in special_tokens_dict.items():
assert key in self.SPECIAL_TOKENS_ATTRIBUTES
if self.verbose:
logger.info("Assigning %s to the %s key of the tokenizer", value, key)
setattr(self, key, value)
if key == "additional_special_tokens":
assert isinstance(value, (list, tuple)) and all(
isinstance(t, (str, AddedToken)) for t in value
), f"Tokens {value} for key {key} should all be str or AddedToken instances"
added_tokens += self.add_tokens(value, special_tokens=True)
else:
assert isinstance(
value, (str, AddedToken)
), f"Token {value} for key {key} should be a str or an AddedToken instance"
added_tokens += self.add_tokens([value], special_tokens=True)
return added_tokens
def add_tokens(self, new_tokens: Union[str, AddedToken, List[str], List[AddedToken]], special_tokens=False) -> int:
"""
Add a list of new tokens to the tokenizer class. If the new tokens are not in the
vocabulary, they are added to it with indices starting from length of the current vocabulary.
Args:
new_tokens: string or list of strings or :class:`~transformers.AddedToken`. Each string is a token to add.
Tokens are only added if they are not already in the vocabulary. AddedToken wraps a string token to
let you personalize its behavior (whether this token should only match against a single word, whether
this token should strip all potential whitespaces on the left side, whether this token should strip
all potential whitespaces on the right side...).
special_tokens: can be used to specify whether the token is a special token. This mostly changes the normalization
behavior (special tokens like CLS or [MASK] are usually not lower-cased, for instance)
See details for :class:`~transformers.AddedToken` in HuggingFace tokenizers library.
Returns:
Number of tokens added to the vocabulary.
Examples::
# Let's see how to increase the vocabulary of Bert model and tokenizer
tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
num_added_toks = tokenizer.add_tokens(['new_tok1', 'my_new-tok2'])
print('We have added', num_added_toks, 'tokens')
model.resize_token_embeddings(len(tokenizer)) # Notice: resize_token_embeddings expects to receive the full size of the new vocabulary, i.e. the length of the tokenizer.
"""
if not new_tokens:
return 0
if not isinstance(new_tokens, (list, tuple)):
new_tokens = [new_tokens]
return self._add_tokens(new_tokens, special_tokens=special_tokens)
@property
def bos_token(self):
""" Beginning of sentence token (string). Log an error if used while not having been set. """
if self._bos_token is None and self.verbose:
logger.error("Using bos_token, but it is not set yet.")
return None
return str(self._bos_token)
@property
def eos_token(self):
""" End of sentence token (string). Log an error if used while not having been set. """
if self._eos_token is None and self.verbose:
logger.error("Using eos_token, but it is not set yet.")
return None
return str(self._eos_token)
@property
def unk_token(self):
""" Unknown token (string). Log an error if used while not having been set. """
if self._unk_token is None and self.verbose:
logger.error("Using unk_token, but it is not set yet.")
return None
return str(self._unk_token)
@property
def sep_token(self):
""" Separation token (string). E.g. separate context and query in an input sequence. Log an error if used while not having been set. """
if self._sep_token is None and self.verbose:
logger.error("Using sep_token, but it is not set yet.")
return None
return str(self._sep_token)
@property
def pad_token(self):
""" Padding token (string). Log an error if used while not having been set. """
if self._pad_token is None and self.verbose:
logger.error("Using pad_token, but it is not set yet.")
return None
return str(self._pad_token)
@property
def cls_token(self):
""" Classification token (string). E.g. to extract a summary of an input sequence leveraging self-attention along the full depth of the model. Log an error if used while not having been set. """
if self._cls_token is None and self.verbose:
logger.error("Using cls_token, but it is not set yet.")
return None
return str(self._cls_token)
@property
def mask_token(self):
""" Mask token (string). E.g. when training a model with masked-language modeling. Log an error if used while not having been set. """
if self._mask_token is None and self.verbose:
logger.error("Using mask_token, but it is not set yet.")
return None
return str(self._mask_token)
@property
def additional_special_tokens(self):
""" All the additional special tokens you may want to use (list of strings). Log an error if used while not having been set. """
if self._additional_special_tokens is None and self.verbose:
logger.error("Using additional_special_tokens, but it is not set yet.")
return None
return [str(tok) for tok in self._additional_special_tokens]
@bos_token.setter
def bos_token(self, value):
self._bos_token = value
@eos_token.setter
def eos_token(self, value):
self._eos_token = value
@unk_token.setter
def unk_token(self, value):
self._unk_token = value
@sep_token.setter
def sep_token(self, value):
self._sep_token = value
@pad_token.setter
def pad_token(self, value):
self._pad_token = value
@cls_token.setter
def cls_token(self, value):
self._cls_token = value
@mask_token.setter
def mask_token(self, value):
self._mask_token = value
@additional_special_tokens.setter
def additional_special_tokens(self, value):
self._additional_special_tokens = value
@property
def bos_token_id(self):
""" Id of the beginning of sentence token in the vocabulary. Log an error if used while not having been set. """
if self._bos_token is None:
return None
return self.convert_tokens_to_ids(self.bos_token)
@property
def eos_token_id(self):
""" Id of the end of sentence token in the vocabulary. Log an error if used while not having been set. """
if self._eos_token is None:
return None
return self.convert_tokens_to_ids(self.eos_token)
@property
def unk_token_id(self):
""" Id of the unknown token in the vocabulary. Log an error if used while not having been set. """
if self._unk_token is None:
return None
return self.convert_tokens_to_ids(self.unk_token)
@property
def sep_token_id(self):
""" Id of the separation token in the vocabulary. E.g. separate context and query in an input sequence. Log an error if used while not having been set. """
if self._sep_token is None:
return None
return self.convert_tokens_to_ids(self.sep_token)
@property
def pad_token_id(self):
""" Id of the padding token in the vocabulary. Log an error if used while not having been set. """
if self._pad_token is None:
return None
return self.convert_tokens_to_ids(self.pad_token)
@property
def pad_token_type_id(self):
""" Id of the padding token type in the vocabulary."""
return self._pad_token_type_id
@property
def cls_token_id(self):
""" Id of the classification token in the vocabulary. E.g. to extract a summary of an input sequence leveraging self-attention along the full depth of the model. Log an error if used while not having been set. """
if self._cls_token is None:
return None
return self.convert_tokens_to_ids(self.cls_token)
@property
def mask_token_id(self):
""" Id of the mask token in the vocabulary. E.g. when training a model with masked-language modeling. Log an error if used while not having been set. """
if self._mask_token is None:
return None
return self.convert_tokens_to_ids(self.mask_token)
@property
def additional_special_tokens_ids(self):
""" Ids of all the additional special tokens in the vocabulary (list of integers). Log an error if used while not having been set. """
return self.convert_tokens_to_ids(self.additional_special_tokens)
@property
def special_tokens_map(self):
""" A dictionary mapping special token class attribute (cls_token, unk_token...) to their
values ('', ''...)
Convert tokens of AddedToken type in string.
All returned tokens are strings
"""
set_attr = {}
for attr in self.SPECIAL_TOKENS_ATTRIBUTES:
attr_value = getattr(self, "_" + attr)
if attr_value:
set_attr[attr] = str(attr_value)
return set_attr
@property
def special_tokens_map_extended(self):
""" A dictionary mapping special token class attribute (cls_token, unk_token...) to their
values ('', ''...)
Keep the tokens as AddedToken if they are of this type.
AddedToken can be used to control more finely how special tokens are tokenized.
"""
set_attr = {}
for attr in self.SPECIAL_TOKENS_ATTRIBUTES:
attr_value = getattr(self, "_" + attr)
if attr_value:
set_attr[attr] = attr_value
return set_attr
@property
def all_special_tokens(self):
""" List all the special tokens ('', ''...) mapped to class attributes
Convert tokens of AddedToken type in string.
All returned tokens are strings
(cls_token, unk_token...).
"""
all_toks = [str(s) for s in self.all_special_tokens_extended]
return all_toks
@property
def all_special_tokens_extended(self):
""" List all the special tokens ('', ''...) mapped to class attributes
Keep the tokens as AddedToken if they are of this type.
AddedToken can be used to control more finely how special tokens are tokenized.
"""
all_toks = []
set_attr = self.special_tokens_map_extended
for attr_value in set_attr.values():
all_toks = all_toks + (list(attr_value) if isinstance(attr_value, (list, tuple)) else [attr_value])
all_toks = list(set(all_toks))
return all_toks
@property
def all_special_ids(self):
""" List the vocabulary indices of the special tokens ('', ''...) mapped to
class attributes (cls_token, unk_token...).
"""
all_toks = self.all_special_tokens
all_ids = self.convert_tokens_to_ids(all_toks)
return all_ids
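# Sketch of how these attributes resolve for a standard BERT tokenizer
# (values are those of 'bert-base-uncased'):
#     tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
#     tokenizer.cls_token, tokenizer.cls_token_id   # '[CLS]', 101
#     tokenizer.pad_token, tokenizer.pad_token_id   # '[PAD]', 0
#     tokenizer.special_tokens_map                  # {'unk_token': '[UNK]', 'sep_token': '[SEP]', ...}
#     tokenizer.all_special_ids                     # ids of every registered special token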
ENCODE_KWARGS_DOCSTRING = r"""
add_special_tokens (:obj:`bool`, `optional`, defaults to :obj:`True`):
If set to ``True``, the sequences will be encoded with the special tokens relative
to their model.
`padding` (:obj:`Union[bool, str]`, `optional`, defaults to :obj:`False`):
Activate and control padding. Accepts the following values:
* `True` or `'longest'`: pad to the longest sequence in the batch (or no padding if only a single sequence is provided),
* `'max_length'`: pad to a max length specified in `max_length` or to the max acceptable input length for the model if no length is provided (`max_length=None`)
* `False` or `'do_not_pad'` (default): No padding (i.e. can output batch with sequences of uneven lengths)
`truncation` (:obj:`Union[bool, str]`, `optional`, defaults to :obj:`False`):
Activate and control truncation. Accepts the following values:
* `True` or `'longest_first'`: truncate to a max length specified in `max_length` or to the max acceptable input length for the model if no length is provided (`max_length=None`). This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided,
* `'only_first'`: truncate to a max length specified in `max_length` or to the max acceptable input length for the model if no length is provided (`max_length=None`). This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided,
* `'only_second'`: truncate to a max length specified in `max_length` or to the max acceptable input length for the model if no length is provided (`max_length=None`). This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided,
* `False` or `'do_not_truncate'` (default): No truncation (i.e. can output batch with sequences length greater than the model max admissible input size)
`max_length` (:obj:`Union[int, None]`, `optional`, defaults to :obj:`None`):
Control the length for padding/truncation. Accepts the following values
* `None` (default): This will use the predefined model max length if required by one of the truncation/padding parameters. If the model has no specific max input length (e.g. XLNet) truncation/padding to max length is deactivated.
* `any integer value` (e.g. `42`): Use this specific maximum length value if required by one of the truncation/padding parameters.
stride (:obj:`int`, `optional`, defaults to ``0``):
If set to a number along with max_length, the overflowing tokens returned when `return_overflowing_tokens=True`
will contain some tokens from the end of the truncated sequence returned to provide some overlap between truncated and overflowing sequences.
The value of this argument defines the number of overlapping tokens.
is_pretokenized (:obj:`bool`, defaults to :obj:`False`):
Set to True to indicate the input is already tokenized
pad_to_multiple_of: (optional) Integer; if set, will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability
>= 7.5 (Volta).
return_tensors (:obj:`str`, `optional`, defaults to :obj:`None`):
Can be set to 'tf', 'pt' or 'np' to return respectively TensorFlow :obj:`tf.constant`,
PyTorch :obj:`torch.Tensor` or Numpy :obj:`np.ndarray` instead of a list of python integers.
"""
ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING = r"""
return_token_type_ids (:obj:`bool`, `optional`, defaults to :obj:`None`):
Whether to return token type IDs. If left to the default, will return the token type IDs according
to the specific tokenizer's default, defined by the :obj:`return_outputs` attribute.
`What are token type IDs? <../glossary.html#token-type-ids>`_
return_attention_mask (:obj:`bool`, `optional`, defaults to :obj:`None`):
Whether to return the attention mask. If left to the default, will return the attention mask according
to the specific tokenizer's default, defined by the :obj:`return_outputs` attribute.
`What are attention masks? <../glossary.html#attention-mask>`__
return_overflowing_tokens (:obj:`bool`, `optional`, defaults to :obj:`False`):
Set to True to return overflowing token sequences (default False).
return_special_tokens_mask (:obj:`bool`, `optional`, defaults to :obj:`False`):
Set to True to return special tokens mask information (default False).
return_offsets_mapping (:obj:`bool`, `optional`, defaults to :obj:`False`):
Set to True to return (char_start, char_end) for each token (default False).
If using Python's tokenizer, this method will raise NotImplementedError.
This one is only available on fast tokenizers inheriting from PreTrainedTokenizerFast.
**kwargs: passed to the `self.tokenize()` method
Return:
A Dictionary of shape::
{
input_ids: list[int],
token_type_ids: list[int] if return_token_type_ids is True (default)
attention_mask: list[int] if return_attention_mask is True (default)
overflowing_tokens: list[int] if the tokenizer is a slow tokenizer, else a List[List[int]] if a ``max_length`` is specified and ``return_overflowing_tokens=True``
special_tokens_mask: list[int] if ``add_special_tokens`` if set to ``True``
and return_special_tokens_mask is True
}
With the fields:
- ``input_ids``: list of token ids to be fed to a model
- ``token_type_ids``: list of token type ids to be fed to a model
- ``attention_mask``: list of indices specifying which tokens should be attended to by the model
- ``overflowing_tokens``: list of overflowing tokens sequences if a max length is specified and ``return_overflowing_tokens=True``.
- ``special_tokens_mask``: if adding special tokens, this is a list of [0, 1], with 1 specifying special added
tokens and 0 specifying sequence tokens.
"""
class PreTrainedTokenizerBase(SpecialTokensMixin):
""" Base class for slow and fast tokenizers.
Handles shared (mostly boilerplate) methods for slow and fast tokenizers.
"""
vocab_files_names: Dict[str, str] = {}
pretrained_vocab_files_map: Dict[str, Dict[str, str]] = {}
pretrained_init_configuration: Dict[str, Dict[str, Any]] = {}
max_model_input_sizes: Dict[str, int] = {}
model_input_names: List[str] = ["token_type_ids", "attention_mask"]
padding_side: str = "right"
def __init__(self, **kwargs):
# inputs and kwargs for saving and re-loading (see ``from_pretrained`` and ``save_pretrained``)
self.init_inputs = ()
self.init_kwargs = kwargs
# For backward compatibility we fallback to set model_max_length from max_len if provided
model_max_length = kwargs.pop("model_max_length", kwargs.pop("max_len", None))
self.model_max_length = model_max_length if model_max_length is not None else VERY_LARGE_INTEGER
# Padding side is right by default and overridden in subclasses. If specified in the kwargs, it is changed.
self.padding_side = kwargs.pop("padding_side", self.padding_side)
assert self.padding_side in [
"right",
"left",
], f"Padding side should be selected between 'right' and 'left', current value: {self.padding_side}"
self.model_input_names = kwargs.pop("model_input_names", self.model_input_names)
super().__init__(**kwargs)
@property
def max_len(self) -> int:
""" Kept here for backward compatibility.
Now renamed to `model_max_length` to avoid ambiguity.
"""
return self.model_max_length
@property
def max_len_single_sentence(self) -> int:
return self.model_max_length - self.num_special_tokens_to_add(pair=False)
@property
def max_len_sentences_pair(self) -> int:
return self.model_max_length - self.num_special_tokens_to_add(pair=True)
@max_len_single_sentence.setter
def max_len_single_sentence(self, value) -> int:
""" For backward compatibility, allow to try to setup 'max_len_single_sentence' """
if value == self.model_max_length - self.num_special_tokens_to_add(pair=False) and self.verbose:
logger.warning(
"Setting 'max_len_single_sentence' is now deprecated. " "This value is automatically set up."
)
else:
raise ValueError(
"Setting 'max_len_single_sentence' is now deprecated. " "This value is automatically set up."
)
@max_len_sentences_pair.setter
def max_len_sentences_pair(self, value) -> int:
""" For backward compatibility, allow to try to setup 'max_len_sentences_pair' """
if value == self.model_max_length - self.num_special_tokens_to_add(pair=True) and self.verbose:
logger.warning(
"Setting 'max_len_sentences_pair' is now deprecated. " "This value is automatically set up."
)
else:
raise ValueError(
"Setting 'max_len_sentences_pair' is now deprecated. " "This value is automatically set up."
)
@classmethod
def from_pretrained(cls, *inputs, **kwargs):
r"""
Instantiate a :class:`~transformers.PreTrainedTokenizer` (or a derived class) from a predefined tokenizer.
Args:
pretrained_model_name_or_path: either:
- a string with the `shortcut name` of a predefined tokenizer to load from cache or download, e.g.: ``bert-base-uncased``.
- a string with the `identifier name` of a predefined tokenizer that was user-uploaded to our S3, e.g.: ``dbmdz/bert-base-german-cased``.
- a path to a `directory` containing vocabulary files required by the tokenizer, for instance saved using the :func:`~transformers.PreTrainedTokenizer.save_pretrained` method, e.g.: ``./my_model_directory/``.
- (not applicable to all derived classes, deprecated) a path or url to a single saved vocabulary file if and only if the tokenizer only requires a single vocabulary file (e.g. Bert, XLNet), e.g.: ``./my_model_directory/vocab.txt``.
cache_dir: (`optional`) string:
Path to a directory in which a downloaded predefined tokenizer vocabulary files should be cached if the standard cache should not be used.
force_download: (`optional`) boolean, default False:
Force to (re-)download the vocabulary files and override the cached versions if they exist.
resume_download: (`optional`) boolean, default False:
Do not delete an incompletely received file. Attempt to resume the download if such a file exists.
proxies: (`optional`) dict, default None:
A dictionary of proxy servers to use by protocol or endpoint, e.g.: {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}.
The proxies are used on each request.
inputs: (`optional`) positional arguments: will be passed to the Tokenizer ``__init__`` method.
kwargs: (`optional`) keyword arguments: will be passed to the Tokenizer ``__init__`` method. Can be used to set special tokens like ``bos_token``, ``eos_token``, ``unk_token``, ``sep_token``, ``pad_token``, ``cls_token``, ``mask_token``, ``additional_special_tokens``. See parameters in the doc string of :class:`~transformers.PreTrainedTokenizer` for details.
Examples::
# We can't instantiate directly the base class `PreTrainedTokenizer` so let's show our examples on a derived class: BertTokenizer
# Download vocabulary from S3 and cache.
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
# Download vocabulary from S3 (user-uploaded) and cache.
tokenizer = BertTokenizer.from_pretrained('dbmdz/bert-base-german-cased')
# If vocabulary files are in a directory (e.g. tokenizer was saved using `save_pretrained('./test/saved_model/')`)
tokenizer = BertTokenizer.from_pretrained('./test/saved_model/')
# If the tokenizer uses a single vocabulary file, you can point directly to this file
tokenizer = BertTokenizer.from_pretrained('./test/saved_model/my_vocab.txt')
# You can link tokens to special vocabulary when instantiating
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', unk_token='<unk>')
# You should be sure '<unk>' is in the vocabulary when doing that.
# Otherwise use tokenizer.add_special_tokens({'unk_token': '<unk>'}) instead.
assert tokenizer.unk_token == '<unk>'
"""
return cls._from_pretrained(*inputs, **kwargs)
@classmethod
def _from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs):
cache_dir = kwargs.pop("cache_dir", None)
force_download = kwargs.pop("force_download", False)
resume_download = kwargs.pop("resume_download", False)
proxies = kwargs.pop("proxies", None)
local_files_only = kwargs.pop("local_files_only", False)
s3_models = list(cls.max_model_input_sizes.keys())
vocab_files = {}
init_configuration = {}
if pretrained_model_name_or_path in s3_models:
# Get the vocabulary from AWS S3 bucket
for file_id, map_list in cls.pretrained_vocab_files_map.items():
vocab_files[file_id] = map_list[pretrained_model_name_or_path]
if (
cls.pretrained_init_configuration
and pretrained_model_name_or_path in cls.pretrained_init_configuration
):
init_configuration = cls.pretrained_init_configuration[pretrained_model_name_or_path].copy()
else:
# Get the vocabulary from local files
logger.info(
"Model name '{}' not found in model shortcut name list ({}). "
"Assuming '{}' is a path, a model identifier, or url to a directory containing tokenizer files.".format(
pretrained_model_name_or_path, ", ".join(s3_models), pretrained_model_name_or_path
)
)
if os.path.isfile(pretrained_model_name_or_path) or is_remote_url(pretrained_model_name_or_path):
if len(cls.vocab_files_names) > 1:
raise ValueError(
"Calling {}.from_pretrained() with the path to a single file or url is not supported."
"Use a model identifier or the path to a directory instead.".format(cls.__name__)
)
logger.warning(
"Calling {}.from_pretrained() with the path to a single file or url is deprecated".format(
cls.__name__
)
)
file_id = list(cls.vocab_files_names.keys())[0]
vocab_files[file_id] = pretrained_model_name_or_path
else:
# At this point pretrained_model_name_or_path is either a directory or a model identifier name
additional_files_names = {
"added_tokens_file": ADDED_TOKENS_FILE,
"special_tokens_map_file": SPECIAL_TOKENS_MAP_FILE,
"tokenizer_config_file": TOKENIZER_CONFIG_FILE,
"full_tokenizer_file": FULL_TOKENIZER_FILE,
}
# Look for the tokenizer files
for file_id, file_name in {**cls.vocab_files_names, **additional_files_names}.items():
if os.path.isdir(pretrained_model_name_or_path):
full_file_name = os.path.join(pretrained_model_name_or_path, file_name)
if not os.path.exists(full_file_name):
logger.info("Didn't find file {}. We won't load it.".format(full_file_name))
full_file_name = None
else:
full_file_name = hf_bucket_url(
pretrained_model_name_or_path, filename=file_name, use_cdn=False
)
vocab_files[file_id] = full_file_name
# Get files from url, cache, or disk depending on the case
try:
resolved_vocab_files = {}
for file_id, file_path in vocab_files.items():
if file_path is None:
resolved_vocab_files[file_id] = None
else:
resolved_vocab_files[file_id] = cached_path(
file_path,
cache_dir=cache_dir,
force_download=force_download,
proxies=proxies,
resume_download=resume_download,
local_files_only=local_files_only,
)
except EnvironmentError:
if pretrained_model_name_or_path in s3_models:
msg = "Couldn't reach server at '{}' to download vocabulary files."
else:
msg = (
"Model name '{}' was not found in tokenizers model name list ({}). "
"We assumed '{}' was a path or url to a directory containing vocabulary files "
"named {}, but couldn't find such vocabulary files at this path or url.".format(
pretrained_model_name_or_path,
", ".join(s3_models),
pretrained_model_name_or_path,
list(cls.vocab_files_names.values()),
)
)
raise EnvironmentError(msg)
if all(full_file_name is None for full_file_name in resolved_vocab_files.values()):
raise EnvironmentError(
"Model name '{}' was not found in tokenizers model name list ({}). "
"We assumed '{}' was a path, a model identifier, or url to a directory containing vocabulary files "
"named {} but couldn't find such vocabulary files at this path or url.".format(
pretrained_model_name_or_path,
", ".join(s3_models),
pretrained_model_name_or_path,
list(cls.vocab_files_names.values()),
)
)
for file_id, file_path in vocab_files.items():
if file_path == resolved_vocab_files[file_id]:
logger.info("loading file {}".format(file_path))
else:
logger.info("loading file {} from cache at {}".format(file_path, resolved_vocab_files[file_id]))
# Prepare tokenizer initialization kwargs
# Did we save some inputs and kwargs to reload?
tokenizer_config_file = resolved_vocab_files.pop("tokenizer_config_file", None)
if tokenizer_config_file is not None:
with open(tokenizer_config_file, encoding="utf-8") as tokenizer_config_handle:
init_kwargs = json.load(tokenizer_config_handle)
saved_init_inputs = init_kwargs.pop("init_inputs", ())
if not init_inputs:
init_inputs = saved_init_inputs
else:
init_kwargs = init_configuration
# Update with newly provided kwargs
init_kwargs.update(kwargs)
# Set max length if needed
if pretrained_model_name_or_path in cls.max_model_input_sizes:
# if we're using a pretrained model, ensure the tokenizer
# won't index sequences longer than the number of positional embeddings
model_max_length = cls.max_model_input_sizes[pretrained_model_name_or_path]
if model_max_length is not None and isinstance(model_max_length, (int, float)):
init_kwargs["model_max_length"] = min(init_kwargs.get("model_max_length", int(1e30)), model_max_length)
# Merge resolved_vocab_files arguments in init_kwargs.
added_tokens_file = resolved_vocab_files.pop("added_tokens_file", None)
for args_name, file_path in resolved_vocab_files.items():
if args_name not in init_kwargs:
init_kwargs[args_name] = file_path
# Instantiate tokenizer.
try:
tokenizer = cls(*init_inputs, **init_kwargs)
except OSError:
raise OSError(
"Unable to load vocabulary from file. "
"Please check that the provided vocabulary is accessible and not corrupted."
)
# Save inputs and kwargs for saving and re-loading with ``save_pretrained``
tokenizer.init_inputs = init_inputs
tokenizer.init_kwargs = init_kwargs
# If there is a complementary special token map, load it
special_tokens_map_file = resolved_vocab_files.pop("special_tokens_map_file", None)
if special_tokens_map_file is not None:
with open(special_tokens_map_file, encoding="utf-8") as special_tokens_map_handle:
special_tokens_map = json.load(special_tokens_map_handle)
for key, value in special_tokens_map.items():
if isinstance(value, dict):
value = AddedToken(**value)
setattr(tokenizer, key, value)
# Add supplementary tokens.
special_tokens = tokenizer.all_special_tokens
if added_tokens_file is not None:
with open(added_tokens_file, encoding="utf-8") as added_tokens_handle:
added_tok_encoder = json.load(added_tokens_handle)
# Sort added tokens by index
added_tok_encoder_sorted = list(sorted(added_tok_encoder.items(), key=lambda x: x[1]))
for token, index in added_tok_encoder_sorted:
assert index == len(tokenizer), (
f"Non-consecutive added token '{token}' found. "
f"Should have index {len(tokenizer)} but has index {index} in saved vocabulary."
)
tokenizer.add_tokens(token, special_tokens=bool(token in special_tokens))
# Check all our special tokens are registered as "no split" tokens (we don't cut them) and are in the vocab
added_tokens = tokenizer.sanitize_special_tokens()
if added_tokens:
logger.warning(
"Special tokens have been added in the vocabulary, make sure the associated word emebedding are fine-tuned or trained."
)
return tokenizer
def save_pretrained(self, save_directory) -> Tuple[str]:
""" Save the tokenizer vocabulary files together with:
- added tokens,
- special-tokens-to-class-attributes-mapping,
- tokenizer instantiation positional and keywords inputs (e.g. do_lower_case for Bert).
Warning: This won't save modifications you may have applied to the tokenizer after the instantiation
(e.g. modifying tokenizer.do_lower_case after creation).
This method makes sure the full tokenizer can then be re-loaded using the
:func:`~transformers.PreTrainedTokenizer.from_pretrained` class method.
"""
if os.path.isfile(save_directory):
logger.error("Provided path ({}) should be a directory, not a file".format(save_directory))
return
os.makedirs(save_directory, exist_ok=True)
special_tokens_map_file = os.path.join(save_directory, SPECIAL_TOKENS_MAP_FILE)
added_tokens_file = os.path.join(save_directory, ADDED_TOKENS_FILE)
tokenizer_config_file = os.path.join(save_directory, TOKENIZER_CONFIG_FILE)
tokenizer_config = copy.deepcopy(self.init_kwargs)
if len(self.init_inputs) > 0:
tokenizer_config["init_inputs"] = copy.deepcopy(self.init_inputs)
for file_id in self.vocab_files_names.keys():
tokenizer_config.pop(file_id, None)
with open(tokenizer_config_file, "w", encoding="utf-8") as f:
f.write(json.dumps(tokenizer_config, ensure_ascii=False))
with open(special_tokens_map_file, "w", encoding="utf-8") as f:
write_dict = {}
for key, value in self.special_tokens_map_extended.items():
if isinstance(value, AddedToken):
write_dict[key] = value.__getstate__()
else:
write_dict[key] = value
f.write(json.dumps(write_dict, ensure_ascii=False))
added_vocab = self.get_added_vocab()
if added_vocab:
with open(added_tokens_file, "w", encoding="utf-8") as f:
out_str = json.dumps(added_vocab, ensure_ascii=False)
f.write(out_str)
vocab_files = self.save_vocabulary(save_directory)
return vocab_files + (special_tokens_map_file, added_tokens_file)
@add_end_docstrings(
ENCODE_KWARGS_DOCSTRING,
"""
**kwargs: passed to the `self.tokenize()` method.
""",
)
def encode(
self,
text: Union[TextInput, PreTokenizedInput, EncodedInput],
text_pair: Optional[Union[TextInput, PreTokenizedInput, EncodedInput]] = None,
add_special_tokens: bool = True,
padding: Union[bool, str] = False,
truncation: Union[bool, str] = False,
max_length: Optional[int] = None,
stride: int = 0,
return_tensors: Optional[Union[str, TensorType]] = None,
**kwargs
):
"""
Converts a string to a sequence of ids (integers), using the tokenizer and vocabulary.
Same as doing ``self.convert_tokens_to_ids(self.tokenize(text))``.
Args:
text (:obj:`str`, :obj:`List[str]` or :obj:`List[int]`):
The first sequence to be encoded. This can be a string, a list of strings (tokenized string using
the `tokenize` method) or a list of integers (tokenized string ids using the `convert_tokens_to_ids`
method)
text_pair (:obj:`str`, :obj:`List[str]` or :obj:`List[int]`, `optional`, defaults to :obj:`None`):
Optional second sequence to be encoded. This can be a string, a list of strings (tokenized
string using the `tokenize` method) or a list of integers (tokenized string ids using the
`convert_tokens_to_ids` method)
"""
encoded_inputs = self.encode_plus(
text,
text_pair=text_pair,
add_special_tokens=add_special_tokens,
padding=padding,
truncation=truncation,
max_length=max_length,
stride=stride,
return_tensors=return_tensors,
**kwargs,
)
return encoded_inputs["input_ids"]
def num_special_tokens_to_add(self, pair: bool = False) -> int:
raise NotImplementedError
def _get_padding_truncation_strategies(
self, padding=False, truncation=False, max_length=None, pad_to_multiple_of=None, verbose=True, **kwargs
):
""" Find the correct padding/truncation strategy with backward compatibility
for old arguments (truncation_strategy and pad_to_max_length) and behaviors.
"""
old_truncation_strategy = kwargs.pop("truncation_strategy", "do_not_truncate")
old_pad_to_max_length = kwargs.pop("pad_to_max_length", False)
# Backward compatibility for previous behavior, maybe we should deprecate it:
# If you only set max_length, it activates truncation for max_length
if max_length is not None and padding is False and truncation is False:
if verbose:
logger.warning(
"Truncation was not explicitely activated but `max_length` is provided a specific value, "
"please use `truncation=True` to explicitely truncate examples to max length. "
"Defaulting to 'longest_first' truncation strategy. "
"If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy "
"more precisely by providing a specific strategy to `truncation`."
)
truncation = "longest_first"
# Get padding strategy
if padding is False and old_pad_to_max_length:
if verbose:
warnings.warn(
"The `pad_to_max_length` argument is deprecated and will be removed in a future version, "
"use `padding=True` or `padding='longest'` to pad to the longest sequence in the batch, or "
"use `padding='max_length'` to pad to a max length. In this case, you can give a specific "
"length with `max_length` (e.g. `max_length=45`) or leave max_length to None to pad to the "
"maximal input size of the model (e.g. 512 for Bert).",
DeprecationWarning,
)
if max_length is None:
padding_strategy = PaddingStrategy.LONGEST
else:
padding_strategy = PaddingStrategy.MAX_LENGTH
elif padding is not False:
if padding is True:
padding_strategy = PaddingStrategy.LONGEST # Default to pad to the longest sequence in the batch
elif not isinstance(padding, PaddingStrategy):
padding_strategy = PaddingStrategy(padding)
else:
padding_strategy = PaddingStrategy.DO_NOT_PAD
# Get truncation strategy
if truncation is False and old_truncation_strategy != "do_not_truncate":
if verbose:
warnings.warn(
"The `truncation_strategy` argument is deprecated and will be removed in a future version, "
"use `truncation=True` to truncate examples to a max length. You can give a specific "
"length with `max_length` (e.g. `max_length=45`) or leave max_length to None to truncate to the "
"maximal input size of the model (e.g. 512 for Bert). "
" If you have pairs of inputs, you can give a specific truncation strategy selected among "
"`truncation='only_first'` (will only truncate the first sentence in the pairs) "
"`truncation='only_second'` (will only truncate the second sentence in the pairs) "
"or `truncation='longest_first'` (will iteratively remove tokens from the longest sentence in the pairs).",
DeprecationWarning,
)
truncation_strategy = TruncationStrategy(old_truncation_strategy)
elif truncation is not False:
if truncation is True:
truncation_strategy = (
TruncationStrategy.LONGEST_FIRST
) # Default to truncate the longest sequences in pairs of inputs
elif not isinstance(truncation, TruncationStrategy):
truncation_strategy = TruncationStrategy(truncation)
else:
truncation_strategy = TruncationStrategy.DO_NOT_TRUNCATE
# Set max length if needed
if max_length is None:
if padding_strategy == PaddingStrategy.MAX_LENGTH:
if self.model_max_length > LARGE_INTEGER:
if verbose:
logger.warning(
"Asking to pad to max_length but no maximum length is provided and the model has no predefined maximum length. "
"Default to no padding."
)
padding_strategy = PaddingStrategy.DO_NOT_PAD
else:
max_length = self.model_max_length
if truncation_strategy != TruncationStrategy.DO_NOT_TRUNCATE:
if self.model_max_length > LARGE_INTEGER:
if verbose:
logger.warning(
"Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. "
"Default to no truncation."
)
truncation_strategy = TruncationStrategy.DO_NOT_TRUNCATE
else:
max_length = self.model_max_length
# Test if we have a padding token
if padding_strategy != PaddingStrategy.DO_NOT_PAD and (not self.pad_token or self.pad_token_id < 0):
raise ValueError(
"Asking to pad but the tokenizer does not have a padding token. "
"Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` "
"or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`."
)
# Check that we will truncate to a multiple of pad_to_multiple_of if both are provided
if (
truncation_strategy != TruncationStrategy.DO_NOT_TRUNCATE
and padding_strategy != PaddingStrategy.DO_NOT_PAD
and pad_to_multiple_of is not None
and max_length is not None
and (max_length % pad_to_multiple_of != 0)
):
raise ValueError(
f"Truncation and padding are both activated but "
f"truncation length ({max_length}) is not a multiple of pad_to_multiple_of ({pad_to_multiple_of})."
)
return padding_strategy, truncation_strategy, max_length, kwargs
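# Resolution sketch (illustrative, not part of the original file): with the new-style arguments, e.g.
#   >>> tokenizer._get_padding_truncation_strategies(padding='max_length', truncation=True, max_length=16)
# the method returns PaddingStrategy.MAX_LENGTH, TruncationStrategy.LONGEST_FIRST and max_length=16,
# assuming the tokenizer defines a pad token.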
@add_end_docstrings(ENCODE_KWARGS_DOCSTRING, ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING)
def __call__(
self,
text: Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]],
text_pair: Optional[Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]]] = None,
add_special_tokens: bool = True,
padding: Union[bool, str] = False,
truncation: Union[bool, str] = False,
max_length: Optional[int] = None,
stride: int = 0,
is_pretokenized: bool = False,
pad_to_multiple_of: Optional[int] = None,
return_tensors: Optional[Union[str, TensorType]] = None,
return_token_type_ids: Optional[bool] = None,
return_attention_mask: Optional[bool] = None,
return_overflowing_tokens: bool = False,
return_special_tokens_mask: bool = False,
return_offsets_mapping: bool = False,
return_length: bool = False,
verbose: bool = True,
**kwargs
) -> BatchEncoding:
"""
Returns a dictionary containing the encoded sequence or sequence pair and additional information:
the mask for sequence classification and the overflowing elements if a ``max_length`` is specified.
Args:
text (:obj:`str`, :obj:`List[str]`, :obj:`List[List[str]]`):
The sequence or batch of sequences to be encoded.
Each sequence can be a string or a list of strings (pre-tokenized string).
If the sequences are provided as list of strings (pretokenized), you must set `is_pretokenized=True`
(to lift the ambiguity with a batch of sequences)
text_pair (:obj:`str`, :obj:`List[str]`, :obj:`List[List[str]]`):
The sequence or batch of sequences to be encoded.
Each sequence can be a string or a list of strings (pre-tokenized string).
If the sequences are provided as list of strings (pretokenized), you must set `is_pretokenized=True`
(to lift the ambiguity with a batch of sequences)
"""
# Input type checking for clearer error
assert isinstance(text, str) or (
isinstance(text, (list, tuple))
and (
len(text) == 0
or (
isinstance(text[0], str)
or (isinstance(text[0], (list, tuple)) and (len(text[0]) == 0 or isinstance(text[0][0], str)))
)
)
), (
"text input must of type `str` (single example), `List[str]` (batch or single pretokenized example) "
"or `List[List[str]]` (batch of pretokenized examples)."
)
assert (
text_pair is None
or isinstance(text_pair, str)
or (
isinstance(text_pair, (list, tuple))
and (
len(text_pair) == 0
or (
isinstance(text_pair[0], str)
or (
isinstance(text_pair[0], (list, tuple))
and (len(text_pair[0]) == 0 or isinstance(text_pair[0][0], str))
)
)
)
)
), (
"text_pair input must of type `str` (single example), `List[str]` (batch or single pretokenized example) "
"or `List[List[str]]` (batch of pretokenized examples)."
)
is_batched = bool(
(not is_pretokenized and isinstance(text, (list, tuple)))
or (is_pretokenized and isinstance(text, (list, tuple)) and text and isinstance(text[0], (list, tuple)))
)
if is_batched:
batch_text_or_text_pairs = list(zip(text, text_pair)) if text_pair is not None else text
return self.batch_encode_plus(
batch_text_or_text_pairs=batch_text_or_text_pairs,
add_special_tokens=add_special_tokens,
padding=padding,
truncation=truncation,
max_length=max_length,
stride=stride,
is_pretokenized=is_pretokenized,
pad_to_multiple_of=pad_to_multiple_of,
return_tensors=return_tensors,
return_token_type_ids=return_token_type_ids,
return_attention_mask=return_attention_mask,
return_overflowing_tokens=return_overflowing_tokens,
return_special_tokens_mask=return_special_tokens_mask,
return_offsets_mapping=return_offsets_mapping,
return_length=return_length,
verbose=verbose,
**kwargs,
)
else:
return self.encode_plus(
text=text,
text_pair=text_pair,
add_special_tokens=add_special_tokens,
padding=padding,
truncation=truncation,
max_length=max_length,
stride=stride,
is_pretokenized=is_pretokenized,
pad_to_multiple_of=pad_to_multiple_of,
return_tensors=return_tensors,
return_token_type_ids=return_token_type_ids,
return_attention_mask=return_attention_mask,
return_overflowing_tokens=return_overflowing_tokens,
return_special_tokens_mask=return_special_tokens_mask,
return_offsets_mapping=return_offsets_mapping,
return_length=return_length,
verbose=verbose,
**kwargs,
)
@add_end_docstrings(ENCODE_KWARGS_DOCSTRING, ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING)
def encode_plus(
self,
text: Union[TextInput, PreTokenizedInput, EncodedInput],
text_pair: Optional[Union[TextInput, PreTokenizedInput, EncodedInput]] = None,
add_special_tokens: bool = True,
padding: Union[bool, str] = False,
truncation: Union[bool, str] = False,
max_length: Optional[int] = None,
stride: int = 0,
is_pretokenized: bool = False,
pad_to_multiple_of: Optional[int] = None,
return_tensors: Optional[Union[str, TensorType]] = None,
return_token_type_ids: Optional[bool] = None,
return_attention_mask: Optional[bool] = None,
return_overflowing_tokens: bool = False,
return_special_tokens_mask: bool = False,
return_offsets_mapping: bool = False,
return_length: bool = False,
verbose: bool = True,
**kwargs
) -> BatchEncoding:
"""
Returns a dictionary containing the encoded sequence or sequence pair and additional information:
the mask for sequence classification and the overflowing elements if a ``max_length`` is specified.
Args:
text (:obj:`str`, :obj:`List[str]` or :obj:`List[int]` (the latter only for non-fast tokenizers)):
The first sequence to be encoded. This can be a string, a list of strings (tokenized string using
the `tokenize` method) or a list of integers (tokenized string ids using the `convert_tokens_to_ids`
method)
text_pair (:obj:`str`, :obj:`List[str]` or :obj:`List[int]`, `optional`, defaults to :obj:`None`):
Optional second sequence to be encoded. This can be a string, a list of strings (tokenized
string using the `tokenize` method) or a list of integers (tokenized string ids using the
`convert_tokens_to_ids` method)
"""
# Backward compatibility for 'truncation_strategy', 'pad_to_max_length'
padding_strategy, truncation_strategy, max_length, kwargs = self._get_padding_truncation_strategies(
padding=padding,
truncation=truncation,
max_length=max_length,
pad_to_multiple_of=pad_to_multiple_of,
verbose=verbose,
**kwargs,
)
return self._encode_plus(
text=text,
text_pair=text_pair,
add_special_tokens=add_special_tokens,
padding_strategy=padding_strategy,
truncation_strategy=truncation_strategy,
max_length=max_length,
stride=stride,
is_pretokenized=is_pretokenized,
pad_to_multiple_of=pad_to_multiple_of,
return_tensors=return_tensors,
return_token_type_ids=return_token_type_ids,
return_attention_mask=return_attention_mask,
return_overflowing_tokens=return_overflowing_tokens,
return_special_tokens_mask=return_special_tokens_mask,
return_offsets_mapping=return_offsets_mapping,
return_length=return_length,
verbose=verbose,
**kwargs,
)
def _encode_plus(
self,
text: Union[TextInput, PreTokenizedInput, EncodedInput],
text_pair: Optional[Union[TextInput, PreTokenizedInput, EncodedInput]] = None,
add_special_tokens: bool = True,
padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD,
truncation_strategy: TruncationStrategy = TruncationStrategy.DO_NOT_TRUNCATE,
max_length: Optional[int] = None,
stride: int = 0,
is_pretokenized: bool = False,
pad_to_multiple_of: Optional[int] = None,
return_tensors: Optional[Union[str, TensorType]] = None,
return_token_type_ids: Optional[bool] = None,
return_attention_mask: Optional[bool] = None,
return_overflowing_tokens: bool = False,
return_special_tokens_mask: bool = False,
return_offsets_mapping: bool = False,
return_length: bool = False,
verbose: bool = True,
**kwargs
) -> BatchEncoding:
raise NotImplementedError
@add_end_docstrings(ENCODE_KWARGS_DOCSTRING, ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING)
def batch_encode_plus(
self,
batch_text_or_text_pairs: Union[
List[TextInput],
List[TextInputPair],
List[PreTokenizedInput],
List[PreTokenizedInputPair],
List[EncodedInput],
List[EncodedInputPair],
],
add_special_tokens: bool = True,
padding: Union[bool, str] = False,
truncation: Union[bool, str] = False,
max_length: Optional[int] = None,
stride: int = 0,
is_pretokenized: bool = False,
pad_to_multiple_of: Optional[int] = None,
return_tensors: Optional[Union[str, TensorType]] = None,
return_token_type_ids: Optional[bool] = None,
return_attention_mask: Optional[bool] = None,
return_overflowing_tokens: bool = False,
return_special_tokens_mask: bool = False,
return_offsets_mapping: bool = False,
return_length: bool = False,
verbose: bool = True,
**kwargs
) -> BatchEncoding:
"""
Returns a dictionary containing the encoded sequence or sequence pair and additional information:
the mask for sequence classification and the overflowing elements if a ``max_length`` is specified.
Args:
batch_text_or_text_pairs (:obj:`List[str]`, :obj:`List[Tuple[str, str]]`,
:obj:`List[List[str]]`, :obj:`List[Tuple[List[str], List[str]]]`,
and for non-fast tokenizers, also:
:obj:`List[List[int]]`, :obj:`List[Tuple[List[int], List[int]]]`):
Batch of sequences or pair of sequences to be encoded.
This can be a list of string/string-sequences/int-sequences or a list of pair of
string/string-sequences/int-sequence (see details in encode_plus)
"""
# Backward compatibility for 'truncation_strategy', 'pad_to_max_length'
padding_strategy, truncation_strategy, max_length, kwargs = self._get_padding_truncation_strategies(
padding=padding,
truncation=truncation,
max_length=max_length,
pad_to_multiple_of=pad_to_multiple_of,
verbose=verbose,
**kwargs,
)
return self._batch_encode_plus(
batch_text_or_text_pairs=batch_text_or_text_pairs,
add_special_tokens=add_special_tokens,
padding_strategy=padding_strategy,
truncation_strategy=truncation_strategy,
max_length=max_length,
stride=stride,
is_pretokenized=is_pretokenized,
pad_to_multiple_of=pad_to_multiple_of,
return_tensors=return_tensors,
return_token_type_ids=return_token_type_ids,
return_attention_mask=return_attention_mask,
return_overflowing_tokens=return_overflowing_tokens,
return_special_tokens_mask=return_special_tokens_mask,
return_offsets_mapping=return_offsets_mapping,
return_length=return_length,
verbose=verbose,
**kwargs,
)
def _batch_encode_plus(
self,
batch_text_or_text_pairs: Union[
List[TextInput],
List[TextInputPair],
List[PreTokenizedInput],
List[PreTokenizedInputPair],
List[EncodedInput],
List[EncodedInputPair],
],
add_special_tokens: bool = True,
padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD,
truncation_strategy: TruncationStrategy = TruncationStrategy.DO_NOT_TRUNCATE,
max_length: Optional[int] = None,
stride: int = 0,
is_pretokenized: bool = False,
pad_to_multiple_of: Optional[int] = None,
return_tensors: Optional[Union[str, TensorType]] = None,
return_token_type_ids: Optional[bool] = None,
return_attention_mask: Optional[bool] = None,
return_overflowing_tokens: bool = False,
return_special_tokens_mask: bool = False,
return_offsets_mapping: bool = False,
return_length: bool = False,
verbose: bool = True,
**kwargs
) -> BatchEncoding:
raise NotImplementedError
def pad(
self,
encoded_inputs: Union[
BatchEncoding,
List[BatchEncoding],
Dict[str, EncodedInput],
Dict[str, List[EncodedInput]],
List[Dict[str, EncodedInput]],
],
padding: Union[bool, str] = True,
max_length: Optional[int] = None,
pad_to_multiple_of: Optional[int] = None,
return_attention_mask: Optional[bool] = None,
return_tensors: Optional[Union[str, TensorType]] = None,
verbose: bool = True,
) -> BatchEncoding:
""" Pad a single encoded input or a batch of encoded inputs up to predefined length or to the max sequence length in the batch.
The padding side (left/right) and the padding token ids are defined at the tokenizer level
(with ``self.padding_side``, ``self.pad_token_id`` and ``self.pad_token_type_id``)
Args:
encoded_inputs: Dictionary of tokenized inputs (`Dict[str, List[int]]`) or batch of tokenized inputs.
Batch of tokenized inputs can be given as dicts of lists or lists of dicts, both work so you can
use ``tokenizer.pad()`` during pre-processing as well as in a PyTorch Dataloader collate function.
(`Dict[str, List[List[int]]]` or `List[Dict[str, List[int]]]`).
padding: Boolean or specific strategy to use for padding.
Select a strategy to pad the returned sequences (according to the model's padding side and padding index) among:
- 'longest' (or `True`, the default): Pad to the longest sequence in the batch
- 'max_length': Pad to the length given by `max_length` (or to the model's maximum input size if `max_length` is not set)
- 'do_not_pad' (or `False`): Do not pad
max_length: maximum length of the returned list and optionally padding length (see below).
Will truncate by taking into account the special tokens.
pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability
>= 7.5 (Volta).
return_attention_mask: (optional) Set to False to avoid returning attention mask (default: set to model specifics)
return_tensors (:obj:`str`, `optional`, defaults to :obj:`None`):
Can be set to 'tf', 'pt' or 'np' to return respectively TensorFlow :obj:`tf.constant`,
PyTorch :obj:`torch.Tensor` or Numpy :obj:`np.ndarray` instead of a list of python integers.
verbose (:obj:`bool`, `optional`, defaults to :obj:`True`):
Set to ``False`` to avoid printing infos and warnings.
"""
# If we have a list of dicts, let's convert it into a dict of lists
if isinstance(encoded_inputs, (list, tuple)) and isinstance(encoded_inputs[0], (dict, BatchEncoding)):
encoded_inputs = {key: [example[key] for example in encoded_inputs] for key in encoded_inputs[0].keys()}
assert "input_ids" in encoded_inputs, (
"You should supply an encoding or a list of encodings to this method. "
"An encoding is the output of one the encoding methods of the tokenizer, i.e. "
"__call__/encode_plus/batch_encode_plus. "
)
if not encoded_inputs["input_ids"]:
if return_attention_mask:
encoded_inputs["attention_mask"] = []
return encoded_inputs
# Convert padding_strategy in PaddingStrategy
padding_strategy, _, max_length, _ = self._get_padding_truncation_strategies(
padding=padding, max_length=max_length, verbose=verbose
)
if encoded_inputs["input_ids"] and not isinstance(encoded_inputs["input_ids"][0], (list, tuple)):
encoded_inputs = self._pad(
encoded_inputs,
max_length=max_length,
padding_strategy=padding_strategy,
pad_to_multiple_of=pad_to_multiple_of,
return_attention_mask=return_attention_mask,
)
return BatchEncoding(encoded_inputs, tensor_type=return_tensors)
batch_size = len(encoded_inputs["input_ids"])
assert all(
len(v) == batch_size for v in encoded_inputs.values()
), "Some items in the output dictionnary have a different batch size than others."
if padding_strategy == PaddingStrategy.LONGEST:
max_length = max(len(inputs) for inputs in encoded_inputs["input_ids"])
padding_strategy = PaddingStrategy.MAX_LENGTH
batch_outputs = {}
for i in range(batch_size):
inputs = dict((k, v[i]) for k, v in encoded_inputs.items())
outputs = self._pad(
inputs,
max_length=max_length,
padding_strategy=padding_strategy,
pad_to_multiple_of=pad_to_multiple_of,
return_attention_mask=return_attention_mask,
)
for key, value in outputs.items():
if key not in batch_outputs:
batch_outputs[key] = []
batch_outputs[key].append(value)
return BatchEncoding(batch_outputs, tensor_type=return_tensors)
def create_token_type_ids_from_sequences(self, token_ids_0: List, token_ids_1: Optional[List] = None) -> List[int]:
if token_ids_1 is None:
return len(token_ids_0) * [0]
return [0] * len(token_ids_0) + [1] * len(token_ids_1)
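# Example (illustrative, not part of the original file): for token_ids_0 of length 3 and
# token_ids_1 of length 2, this base implementation returns [0, 0, 0, 1, 1]; subclasses
# such as BertTokenizer override it to also account for the special tokens they insert.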
def build_inputs_with_special_tokens(self, token_ids_0: List, token_ids_1: Optional[List] = None) -> List:
"""
Build model inputs from a sequence or a pair of sequences for sequence classification tasks
by concatenating and adding special tokens. This implementation does not add special tokens.
"""
if token_ids_1 is None:
return token_ids_0
return token_ids_0 + token_ids_1
@add_end_docstrings(ENCODE_KWARGS_DOCSTRING, ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING)
def prepare_for_model(
self,
ids: List[int],
pair_ids: Optional[List[int]] = None,
add_special_tokens: bool = True,
padding: Union[bool, str] = False,
truncation: Union[bool, str] = False,
max_length: Optional[int] = None,
stride: int = 0,
pad_to_multiple_of: Optional[int] = None,
return_tensors: Optional[Union[str, TensorType]] = None,
return_token_type_ids: Optional[bool] = None,
return_attention_mask: Optional[bool] = None,
return_overflowing_tokens: bool = False,
return_special_tokens_mask: bool = False,
return_offsets_mapping: bool = False,
return_length: bool = False,
verbose: bool = True,
prepend_batch_axis: bool = False,
**kwargs
) -> BatchEncoding:
""" Prepares a sequence of input id, or a pair of sequences of inputs ids so that it can be used by the model.
It adds special tokens, truncates sequences if overflowing while taking into account the special tokens and
manages a moving window (with user defined stride) for overflowing tokens
Args:
ids: list of tokenized input ids. Can be obtained from a string by chaining the
`tokenize` and `convert_tokens_to_ids` methods.
pair_ids: Optional second list of input ids. Can be obtained from a string by chaining the
`tokenize` and `convert_tokens_to_ids` methods.
"""
if "return_lengths" in kwargs:
if verbose:
warnings.warn(
"The PreTrainedTokenizerBase.prepare_for_model `return_lengths` parameter is deprecated. "
"Please use `return_length` instead.",
FutureWarning,
)
return_length = kwargs["return_lengths"]
# Backward compatibility for 'truncation_strategy', 'pad_to_max_length'
padding_strategy, truncation_strategy, max_length, kwargs = self._get_padding_truncation_strategies(
padding=padding,
truncation=truncation,
max_length=max_length,
pad_to_multiple_of=pad_to_multiple_of,
verbose=verbose,
**kwargs,
)
pair = bool(pair_ids is not None)
len_ids = len(ids)
len_pair_ids = len(pair_ids) if pair else 0
# Load from model defaults
if return_token_type_ids is None:
return_token_type_ids = "token_type_ids" in self.model_input_names
if return_attention_mask is None:
return_attention_mask = "attention_mask" in self.model_input_names
encoded_inputs = {}
# Compute the total size of the returned encodings
total_len = len_ids + len_pair_ids + (self.num_special_tokens_to_add(pair=pair) if add_special_tokens else 0)
# Truncation: Handle max sequence length
if truncation_strategy != TruncationStrategy.DO_NOT_TRUNCATE and max_length and total_len > max_length:
ids, pair_ids, overflowing_tokens = self.truncate_sequences(
ids,
pair_ids=pair_ids,
num_tokens_to_remove=total_len - max_length,
truncation_strategy=truncation_strategy,
stride=stride,
)
if return_overflowing_tokens:
encoded_inputs["overflowing_tokens"] = overflowing_tokens
encoded_inputs["num_truncated_tokens"] = total_len - max_length
# Add special tokens
if add_special_tokens:
sequence = self.build_inputs_with_special_tokens(ids, pair_ids)
token_type_ids = self.create_token_type_ids_from_sequences(ids, pair_ids)
else:
sequence = ids + pair_ids if pair else ids
token_type_ids = [0] * len(ids) + ([1] * len(pair_ids) if pair else [])
# Build output dictionary
encoded_inputs["input_ids"] = sequence
if return_token_type_ids:
encoded_inputs["token_type_ids"] = token_type_ids
if return_special_tokens_mask:
if add_special_tokens:
encoded_inputs["special_tokens_mask"] = self.get_special_tokens_mask(ids, pair_ids)
else:
encoded_inputs["special_tokens_mask"] = [0] * len(sequence)
# Check lengths
if max_length is None and len(encoded_inputs["input_ids"]) > self.model_max_length and verbose:
logger.warning(
"Token indices sequence length is longer than the specified maximum sequence length "
"for this model ({} > {}). Running this sequence through the model will result in "
"indexing errors".format(len(ids), self.model_max_length)
)
# Padding
if padding_strategy != PaddingStrategy.DO_NOT_PAD or return_attention_mask:
encoded_inputs = self.pad(
encoded_inputs,
max_length=max_length,
padding=padding_strategy.value,
pad_to_multiple_of=pad_to_multiple_of,
return_attention_mask=return_attention_mask,
)
if return_length:
encoded_inputs["length"] = len(encoded_inputs["input_ids"])
batch_outputs = BatchEncoding(
encoded_inputs, tensor_type=return_tensors, prepend_batch_axis=prepend_batch_axis
)
return batch_outputs
def truncate_sequences(
self,
ids: List[int],
pair_ids: Optional[List[int]] = None,
num_tokens_to_remove: int = 0,
truncation_strategy: Union[str, TruncationStrategy] = "longest_first",
stride: int = 0,
) -> Tuple[List[int], List[int], List[int]]:
""" Truncates a sequence pair in place to the maximum length.
Args:
ids: list of tokenized input ids. Can be obtained from a string by chaining the
`tokenize` and `convert_tokens_to_ids` methods.
pair_ids: Optional second list of input ids. Can be obtained from a string by chaining the
`tokenize` and `convert_tokens_to_ids` methods.
num_tokens_to_remove (:obj:`int`, `optional`, defaults to ``0``):
number of tokens to remove using the truncation strategy
truncation_strategy (:obj:`string`, `optional`, defaults to "longest_first"):
String selected in the following options:
- 'longest_first' (default): Iteratively reduce the input sequences until the input is under max_length,
removing one token at a time from the longest sequence (when there is a pair of input sequences).
Overflowing tokens only contain overflow from the first sequence.
- 'only_first': Only truncate the first sequence. Logs an error if the first sequence is shorter than or equal to num_tokens_to_remove.
- 'only_second': Only truncate the second sequence
- 'do_not_truncate'
stride (:obj:`int`, `optional`, defaults to ``0``):
If set to a number along with max_length, the overflowing tokens returned will contain some tokens
from the main sequence returned. The value of this argument defines the number of additional tokens.
"""
if num_tokens_to_remove <= 0:
return ids, pair_ids, []
if not isinstance(truncation_strategy, TruncationStrategy):
truncation_strategy = TruncationStrategy(truncation_strategy)
overflowing_tokens = []
if truncation_strategy == TruncationStrategy.LONGEST_FIRST:
for _ in range(num_tokens_to_remove):
if pair_ids is None or len(ids) > len(pair_ids):
if not overflowing_tokens:
window_len = min(len(ids), stride + 1)
else:
window_len = 1
overflowing_tokens.extend(ids[-window_len:])
ids = ids[:-1]
else:
if not overflowing_tokens:
window_len = min(len(pair_ids), stride + 1)
else:
window_len = 1
overflowing_tokens.extend(pair_ids[-window_len:])
pair_ids = pair_ids[:-1]
elif truncation_strategy == TruncationStrategy.ONLY_FIRST:
if len(ids) > num_tokens_to_remove:
window_len = min(len(ids), stride + num_tokens_to_remove)
overflowing_tokens = ids[-window_len:]
ids = ids[:-num_tokens_to_remove]
else:
logger.error(
f"We need to remove {num_tokens_to_remove} to truncate the input"
f"but the first sequence has a length {len(ids)}. "
f"Please select another truncation strategy than {truncation_strategy}, "
f"for instance 'longest_first' or 'only_second'."
)
elif truncation_strategy == TruncationStrategy.ONLY_SECOND and pair_ids is not None:
if len(pair_ids) > num_tokens_to_remove:
window_len = min(len(pair_ids), stride + num_tokens_to_remove)
overflowing_tokens = pair_ids[-window_len:]
pair_ids = pair_ids[:-num_tokens_to_remove]
else:
logger.error(
f"We need to remove {num_tokens_to_remove} to truncate the input"
f"but the second sequence has a length {len(pair_ids)}. "
f"Please select another truncation strategy than {truncation_strategy}, "
f"for instance 'longest_first' or 'only_first'."
)
return (ids, pair_ids, overflowing_tokens)
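# Worked example (illustrative, not part of the original file): with ids=[1, 2, 3, 4, 5],
# pair_ids=None, num_tokens_to_remove=2, the default 'longest_first' strategy and stride=0,
# the loop removes one token at a time from the end and returns
# ([1, 2, 3], None, [5, 4]), where the last list holds the overflowing tokens.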
def _pad(
self,
encoded_inputs: Union[Dict[str, EncodedInput], BatchEncoding],
max_length: Optional[int] = None,
padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD,
pad_to_multiple_of: Optional[int] = None,
return_attention_mask: Optional[bool] = None,
) -> dict:
""" Pad encoded inputs (on left/right and up to predefined legnth or max length in the batch)
Args:
encoded_inputs: Dictionary of tokenized inputs (`List[int]`) or batch of tokenized inputs (`List[List[int]]`).
max_length: maximum length of the returned list and optionally padding length (see below).
Will truncate by taking into account the special tokens.
padding_strategy: PaddingStrategy to use for padding.
- PaddingStrategy.LONGEST Pad to the longest sequence in the batch
- PaddingStrategy.MAX_LENGTH: Pad to the max length (default)
- PaddingStrategy.DO_NOT_PAD: Do not pad
The tokenizer padding sides are defined in self.padding_side:
- 'left': pads on the left of the sequences
- 'right': pads on the right of the sequences
pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability
>= 7.5 (Volta).
return_attention_mask: (optional) Set to False to avoid returning attention mask (default: set to model specifics)
"""
# Load from model defaults
if return_attention_mask is None:
return_attention_mask = "attention_mask" in self.model_input_names
if padding_strategy == PaddingStrategy.LONGEST:
max_length = len(encoded_inputs["input_ids"])
if max_length is not None and pad_to_multiple_of is not None and (max_length % pad_to_multiple_of != 0):
max_length = ((max_length // pad_to_multiple_of) + 1) * pad_to_multiple_of
needs_to_be_padded = (
padding_strategy != PaddingStrategy.DO_NOT_PAD and len(encoded_inputs["input_ids"]) != max_length
)
if needs_to_be_padded:
difference = max_length - len(encoded_inputs["input_ids"])
if self.padding_side == "right":
if return_attention_mask:
encoded_inputs["attention_mask"] = [1] * len(encoded_inputs["input_ids"]) + [0] * difference
if "token_type_ids" in encoded_inputs:
encoded_inputs["token_type_ids"] = (
encoded_inputs["token_type_ids"] + [self.pad_token_type_id] * difference
)
if "special_tokens_mask" in encoded_inputs:
encoded_inputs["special_tokens_mask"] = encoded_inputs["special_tokens_mask"] + [1] * difference
encoded_inputs["input_ids"] = encoded_inputs["input_ids"] + [self.pad_token_id] * difference
elif self.padding_side == "left":
if return_attention_mask:
encoded_inputs["attention_mask"] = [0] * difference + [1] * len(encoded_inputs["input_ids"])
if "token_type_ids" in encoded_inputs:
encoded_inputs["token_type_ids"] = [self.pad_token_type_id] * difference + encoded_inputs[
"token_type_ids"
]
if "special_tokens_mask" in encoded_inputs:
encoded_inputs["special_tokens_mask"] = [1] * difference + encoded_inputs["special_tokens_mask"]
encoded_inputs["input_ids"] = [self.pad_token_id] * difference + encoded_inputs["input_ids"]
else:
raise ValueError("Invalid padding strategy:" + str(self.padding_side))
else:
if return_attention_mask:
encoded_inputs["attention_mask"] = [1] * len(encoded_inputs["input_ids"])
return encoded_inputs
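# Padding sketch (illustrative, not part of the original file): with padding_side='right',
# pad_token_id=0, encoded_inputs={'input_ids': [101, 7592, 102]} and max_length=5, `_pad` returns
#   {'input_ids': [101, 7592, 102, 0, 0], 'attention_mask': [1, 1, 1, 0, 0]}
# (the token ids shown are placeholders).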
def batch_decode(self, sequences: List[List[int]], **kwargs) -> List[str]:
return [self.decode(seq, **kwargs) for seq in sequences]
def decode(
self, token_ids: List[int], skip_special_tokens: bool = False, clean_up_tokenization_spaces: bool = True
) -> str:
"""
Converts a sequence of ids (integers) to a string, using the tokenizer and vocabulary
with options to remove special tokens and clean up tokenization spaces.
Similar to doing ``self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids))``.
Args:
token_ids: list of tokenized input ids. Can be obtained using the `encode` or `encode_plus` methods.
skip_special_tokens: if set to True, will remove special tokens from the decoded string.
clean_up_tokenization_spaces: if set to True, will clean up the tokenization spaces.
"""
raise NotImplementedError
def get_special_tokens_mask(
self, token_ids_0: List, token_ids_1: Optional[List] = None, already_has_special_tokens: bool = False
) -> List[int]:
"""
Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer ``prepare_for_model`` or ``encode_plus`` methods.
Args:
token_ids_0: list of ids (must not contain special tokens)
token_ids_1: Optional list of ids (must not contain special tokens), necessary when fetching sequence ids
for sequence pairs
already_has_special_tokens: (default False) Set to True if the token list is already formatted with
special tokens for the model
Returns:
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
"""
assert already_has_special_tokens and token_ids_1 is None, (
"You cannot use ``already_has_special_tokens=False`` with this tokenizer. "
"Please use a slow (full python) tokenizer to activate this argument."
"Or set `return_special_token_mask=True` when calling the encoding method "
"to get the special tokens mask in any tokenizer. "
)
all_special_ids = self.all_special_ids # cache the property
special_tokens_mask = [1 if token in all_special_ids else 0 for token in token_ids_0]
return special_tokens_mask
@staticmethod
def clean_up_tokenization(out_string: str) -> str:
""" Clean up a list of simple English tokenization artifacts like spaces before punctuations and abreviated forms.
"""
out_string = (
out_string.replace(" .", ".")
.replace(" ?", "?")
.replace(" !", "!")
.replace(" ,", ",")
.replace(" ' ", "'")
.replace(" n't", "n't")
.replace(" 'm", "'m")
.replace(" 's", "'s")
.replace(" 've", "'ve")
.replace(" 're", "'re")
)
return out_string
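# Example (illustrative, not part of the original file):
#   >>> PreTrainedTokenizerBase.clean_up_tokenization("do n't stop , please !")
#   "don't stop, please!"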
================================================
FILE: data/dataset_refer_bert.py
================================================
import os
import sys
import torch.utils.data as data
import torch
from torchvision import transforms
from torch.autograd import Variable
import numpy as np
from PIL import Image
import torchvision.transforms.functional as TF
import random
from bert.tokenization_bert import BertTokenizer
import h5py
from refer.refer import REFER
from args import get_parser
# Dataset configuration initialization
parser = get_parser()
args = parser.parse_args()
class ReferDataset(data.Dataset):
def __init__(self,
args,
image_transforms=None,
target_transforms=None,
split='train',
eval_mode=False):
self.classes = []
self.image_transforms = image_transforms
self.target_transform = target_transforms
self.split = split
self.refer = REFER(args.refer_data_root, args.dataset, args.splitBy)
self.max_tokens = 20
ref_ids = self.refer.getRefIds(split=self.split)
img_ids = self.refer.getImgIds(ref_ids)
all_imgs = self.refer.Imgs
self.imgs = list(all_imgs[i] for i in img_ids)
self.ref_ids = ref_ids
self.input_ids = []
self.attention_masks = []
self.tokenizer = BertTokenizer.from_pretrained(args.bert_tokenizer)
self.eval_mode = eval_mode
# if we are testing on a dataset, test all sentences of an object;
# otherwise, we are validating during training; randomly sample one sentence for efficiency
for r in ref_ids:
ref = self.refer.Refs[r]
sentences_for_ref = []
attentions_for_ref = []
for i, (el, sent_id) in enumerate(zip(ref['sentences'], ref['sent_ids'])):
sentence_raw = el['raw']
attention_mask = [0] * self.max_tokens
padded_input_ids = [0] * self.max_tokens
input_ids = self.tokenizer.encode(text=sentence_raw, add_special_tokens=True)
# truncation of tokens
input_ids = input_ids[:self.max_tokens]
padded_input_ids[:len(input_ids)] = input_ids
attention_mask[:len(input_ids)] = [1]*len(input_ids)
sentences_for_ref.append(torch.tensor(padded_input_ids).unsqueeze(0))
attentions_for_ref.append(torch.tensor(attention_mask).unsqueeze(0))
self.input_ids.append(sentences_for_ref)
self.attention_masks.append(attentions_for_ref)
def get_classes(self):
return self.classes
def __len__(self):
return len(self.ref_ids)
def __getitem__(self, index):
this_ref_id = self.ref_ids[index]
this_img_id = self.refer.getImgIds(this_ref_id)
this_img = self.refer.Imgs[this_img_id[0]]
img = Image.open(os.path.join(self.refer.IMAGE_DIR, this_img['file_name'])).convert("RGB")
ref = self.refer.loadRefs(this_ref_id)
ref_mask = np.array(self.refer.getMask(ref[0])['mask'])
annot = np.zeros(ref_mask.shape)
annot[ref_mask == 1] = 1
annot = Image.fromarray(annot.astype(np.uint8), mode="P")
if self.image_transforms is not None:
# resize, from PIL to tensor, and mean and std normalization
img, target = self.image_transforms(img, annot)
if self.eval_mode:
embedding = []
att = []
for s in range(len(self.input_ids[index])):
e = self.input_ids[index][s]
a = self.attention_masks[index][s]
embedding.append(e.unsqueeze(-1))
att.append(a.unsqueeze(-1))
tensor_embeddings = torch.cat(embedding, dim=-1)
attention_mask = torch.cat(att, dim=-1)
else:
choice_sent = np.random.choice(len(self.input_ids[index]))
tensor_embeddings = self.input_ids[index][choice_sent]
attention_mask = self.attention_masks[index][choice_sent]
return img, target, tensor_embeddings, attention_mask
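# Shape note (illustrative, not part of the original file): with max_tokens=20, an item
# returned in training mode is (img, target, tensor_embeddings, attention_mask) where
# tensor_embeddings and attention_mask have shape (1, 20); in eval_mode the sentence
# tensors are stacked along a new last dimension, giving shape (1, 20, num_sentences).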
================================================
FILE: demo_inference.py
================================================
image_path = './demo/demo.jpg'
sentence = 'the most handsome guy'
weights = './checkpoints/refcoco.pth'
device = 'cuda:0'
# pre-process the input image
from PIL import Image
import torchvision.transforms as T
import numpy as np
img = Image.open(image_path).convert("RGB")
img_ndarray = np.array(img) # (orig_h, orig_w, 3); for visualization
original_w, original_h = img.size # PIL .size returns width first and height second
image_transforms = T.Compose(
[
T.Resize(480),
T.ToTensor(),
T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
]
)
img = image_transforms(img).unsqueeze(0) # (1, 3, 480, 480)
img = img.to(device) # for inference (input)
# pre-process the raw sentence
from bert.tokenization_bert import BertTokenizer
import torch
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
sentence_tokenized = tokenizer.encode(text=sentence, add_special_tokens=True)
sentence_tokenized = sentence_tokenized[:20]  # if the sentence is longer than 20 tokens, then this truncates it to 20 tokens
# pad the tokenized sentence
padded_sent_toks = [0] * 20
padded_sent_toks[:len(sentence_tokenized)] = sentence_tokenized
# create a sentence token mask: 1 for real words; 0 for padded tokens
attention_mask = [0] * 20
attention_mask[:len(sentence_tokenized)] = [1]*len(sentence_tokenized)
# convert lists to tensors
padded_sent_toks = torch.tensor(padded_sent_toks).unsqueeze(0) # (1, 20)
attention_mask = torch.tensor(attention_mask).unsqueeze(0) # (1, 20)
padded_sent_toks = padded_sent_toks.to(device) # for inference (input)
attention_mask = attention_mask.to(device) # for inference (input)
# initialize model and load weights
from bert.modeling_bert import BertModel
from lib import segmentation
# construct a mini args class; like from a config file
class args:
swin_type = 'base'
window12 = True
mha = ''
fusion_drop = 0.0
single_model = segmentation.__dict__['lavt'](pretrained='', args=args)
single_model.to(device)
model_class = BertModel
single_bert_model = model_class.from_pretrained('bert-base-uncased')
single_bert_model.pooler = None
checkpoint = torch.load(weights, map_location='cpu')
single_bert_model.load_state_dict(checkpoint['bert_model'])
single_model.load_state_dict(checkpoint['model'])
model = single_model.to(device)
bert_model = single_bert_model.to(device)
# inference
import torch.nn.functional as F
last_hidden_states = bert_model(padded_sent_toks, attention_mask=attention_mask)[0]
embedding = last_hidden_states.permute(0, 2, 1)
output = model(img, embedding, l_mask=attention_mask.unsqueeze(-1))
output = output.argmax(1, keepdim=True) # (1, 1, 480, 480)
output = F.interpolate(output.float(), (original_h, original_w)) # 'nearest'; resize to the original image size
output = output.squeeze() # (orig_h, orig_w)
output = output.cpu().data.numpy() # (orig_h, orig_w)
# show/save results
def overlay_davis(image, mask, colors=[[0, 0, 0], [255, 0, 0]], cscale=1, alpha=0.4):
from scipy.ndimage import binary_dilation  # scipy.ndimage.morphology is deprecated in newer SciPy versions
colors = np.reshape(colors, (-1, 3))
colors = np.atleast_2d(colors) * cscale
im_overlay = image.copy()
object_ids = np.unique(mask)
for object_id in object_ids[1:]:
# Overlay color on binary mask
foreground = image*alpha + np.ones(image.shape)*(1-alpha) * np.array(colors[object_id])
binary_mask = mask == object_id
# Compose image
im_overlay[binary_mask] = foreground[binary_mask]
# contours = skimage.morphology.binary.binary_dilation(binary_mask) - binary_mask
contours = binary_dilation(binary_mask) ^ binary_mask
# contours = cv2.dilate(binary_mask, cv2.getStructuringElement(cv2.MORPH_CROSS,(3,3))) - binary_mask
im_overlay[contours, :] = 0
return im_overlay.astype(image.dtype)
output = output.astype(np.uint8) # (orig_h, orig_w), np.uint8
# Overlay the mask on the image
visualization = overlay_davis(img_ndarray, output) # red
visualization = Image.fromarray(visualization)
# show the visualization
#visualization.show()
# Save the visualization
visualization.save('./demo/demo_result.jpg')
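# Optional extra (illustrative, not part of the original demo): the raw binary mask can
# also be saved on its own for inspection; the path below is only an example.
#   >>> Image.fromarray(output * 255).save('./demo/demo_mask.png')  # scale {0, 1} to {0, 255}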
================================================
FILE: lib/_utils.py
================================================
from collections import OrderedDict
import sys
import torch
from torch import nn
from torch.nn import functional as F
from bert.modeling_bert import BertModel
class _LAVTSimpleDecode(nn.Module):
def __init__(self, backbone, classifier):
super(_LAVTSimpleDecode, self).__init__()
self.backbone = backbone
self.classifier = classifier
def forward(self, x, l_feats, l_mask):
input_shape = x.shape[-2:]
features = self.backbone(x, l_feats, l_mask)
x_c1, x_c2, x_c3, x_c4 = features
x = self.classifier(x_c4, x_c3, x_c2, x_c1)
x = F.interpolate(x, size=input_shape, mode='bilinear', align_corners=True)
return x
class LAVT(_LAVTSimpleDecode):
pass
###############################################
# LAVT One: put BERT inside the overall model #
###############################################
class _LAVTOneSimpleDecode(nn.Module):
def __init__(self, backbone, classifier, args):
super(_LAVTOneSimpleDecode, self).__init__()
self.backbone = backbone
self.classifier = classifier
self.text_encoder = BertModel.from_pretrained(args.ck_bert)
self.text_encoder.pooler = None
def forward(self, x, text, l_mask):
input_shape = x.shape[-2:]
### language inference ###
        l_feats = self.text_encoder(text, attention_mask=l_mask)[0]  # (B, N_l, 768), e.g. (6, 10, 768)
l_feats = l_feats.permute(0, 2, 1) # (B, 768, N_l) to make Conv1d happy
l_mask = l_mask.unsqueeze(dim=-1) # (batch, N_l, 1)
##########################
features = self.backbone(x, l_feats, l_mask)
x_c1, x_c2, x_c3, x_c4 = features
x = self.classifier(x_c4, x_c3, x_c2, x_c1)
x = F.interpolate(x, size=input_shape, mode='bilinear', align_corners=True)
return x
class LAVTOne(_LAVTOneSimpleDecode):
pass
================================================
FILE: lib/backbone.py
================================================
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.checkpoint as checkpoint
import numpy as np
from timm.models.layers import DropPath, to_2tuple, trunc_normal_
from .mmcv_custom import load_checkpoint
from mmseg.utils import get_root_logger
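# Multi-modal Swin Transformer backbone: standard Swin attention blocks (W-MSA/SW-MSA)
# with PWAM language-vision fusion and a learned gate applied at the end of every stage
# (see MMBasicLayer below).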
class Mlp(nn.Module):
""" Multilayer perceptron."""
def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
super().__init__()
out_features = out_features or in_features
hidden_features = hidden_features or in_features
self.fc1 = nn.Linear(in_features, hidden_features)
self.act = act_layer()
self.fc2 = nn.Linear(hidden_features, out_features)
self.drop = nn.Dropout(drop)
def forward(self, x):
x = self.fc1(x)
x = self.act(x)
x = self.drop(x)
x = self.fc2(x)
x = self.drop(x)
return x
def window_partition(x, window_size):
"""
Args:
x: (B, H, W, C)
window_size (int): window size
Returns:
windows: (num_windows*B, window_size, window_size, C)
"""
B, H, W, C = x.shape
x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
return windows
def window_reverse(windows, window_size, H, W):
"""
Args:
windows: (num_windows*B, window_size, window_size, C)
window_size (int): Window size
H (int): Height of image
W (int): Width of image
Returns:
x: (B, H, W, C)
"""
B = int(windows.shape[0] / (H * W / window_size / window_size))
x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
return x
class WindowAttention(nn.Module):
""" Window based multi-head self attention (W-MSA) module with relative position bias.
It supports both of shifted and non-shifted window.
Args:
dim (int): Number of input channels.
window_size (tuple[int]): The height and width of the window.
num_heads (int): Number of attention heads.
qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set
attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0
proj_drop (float, optional): Dropout ratio of output. Default: 0.0
"""
def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.):
super().__init__()
self.dim = dim
self.window_size = window_size # Wh, Ww
self.num_heads = num_heads
head_dim = dim // num_heads
self.scale = qk_scale or head_dim ** -0.5
# define a parameter table of relative position bias
self.relative_position_bias_table = nn.Parameter(
torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)) # 2*Wh-1 * 2*Ww-1, nH
# get pair-wise relative position index for each token inside the window
coords_h = torch.arange(self.window_size[0])
coords_w = torch.arange(self.window_size[1])
coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww
coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww
relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww
relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2
relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0
relative_coords[:, :, 1] += self.window_size[1] - 1
relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1
relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww
self.register_buffer("relative_position_index", relative_position_index)
self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
self.attn_drop = nn.Dropout(attn_drop)
self.proj = nn.Linear(dim, dim)
self.proj_drop = nn.Dropout(proj_drop)
trunc_normal_(self.relative_position_bias_table, std=.02)
self.softmax = nn.Softmax(dim=-1)
def forward(self, x, mask=None):
""" Forward function.
Args:
x: input features with shape of (num_windows*B, N, C)
mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None
"""
B_, N, C = x.shape
qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
q = q * self.scale
attn = (q @ k.transpose(-2, -1))
relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view(
self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH
relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww
attn = attn + relative_position_bias.unsqueeze(0)
if mask is not None:
nW = mask.shape[0]
attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0)
attn = attn.view(-1, self.num_heads, N, N)
attn = self.softmax(attn)
else:
attn = self.softmax(attn)
attn = self.attn_drop(attn)
x = (attn @ v).transpose(1, 2).reshape(B_, N, C) # cat op
x = self.proj(x)
x = self.proj_drop(x)
return x
class SwinTransformerBlock(nn.Module):
""" Swin Transformer Block.
Args:
dim (int): Number of input channels.
num_heads (int): Number of attention heads.
window_size (int): Window size.
shift_size (int): Shift size for SW-MSA.
mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
drop (float, optional): Dropout rate. Default: 0.0
attn_drop (float, optional): Attention dropout rate. Default: 0.0
drop_path (float, optional): Stochastic depth rate. Default: 0.0
act_layer (nn.Module, optional): Activation layer. Default: nn.GELU
norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
"""
def __init__(self, dim, num_heads, window_size=7, shift_size=0,
mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0.,
act_layer=nn.GELU, norm_layer=nn.LayerNorm):
super().__init__()
self.dim = dim
self.num_heads = num_heads
self.window_size = window_size
self.shift_size = shift_size
self.mlp_ratio = mlp_ratio
        assert 0 <= self.shift_size < self.window_size, "shift_size must be in [0, window_size)"
self.norm1 = norm_layer(dim)
self.attn = WindowAttention(
dim, window_size=to_2tuple(self.window_size), num_heads=num_heads,
qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop)
self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
self.norm2 = norm_layer(dim)
mlp_hidden_dim = int(dim * mlp_ratio)
self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
self.H = None
self.W = None
def forward(self, x, mask_matrix):
""" Forward function.
Args:
x: Input feature, tensor size (B, H*W, C).
H, W: Spatial resolution of the input feature.
mask_matrix: Attention mask for cyclic shift.
"""
B, L, C = x.shape
H, W = self.H, self.W
assert L == H * W, "input feature has wrong size"
shortcut = x
x = self.norm1(x)
x = x.view(B, H, W, C)
# pad feature maps to multiples of window size
pad_l = pad_t = 0
pad_r = (self.window_size - W % self.window_size) % self.window_size
pad_b = (self.window_size - H % self.window_size) % self.window_size
x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b))
_, Hp, Wp, _ = x.shape
# cyclic shift
if self.shift_size > 0:
shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2))
attn_mask = mask_matrix
else:
shifted_x = x
attn_mask = None
# partition windows
x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C
x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C
# W-MSA/SW-MSA
attn_windows = self.attn(x_windows, mask=attn_mask) # nW*B, window_size*window_size, C
# merge windows
attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)
shifted_x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C
# reverse cyclic shift
if self.shift_size > 0:
x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2))
else:
x = shifted_x
if pad_r > 0 or pad_b > 0:
x = x[:, :H, :W, :].contiguous()
x = x.view(B, H * W, C)
        # residual connection + FFN (feed-forward network)
x = shortcut + self.drop_path(x)
x = x + self.drop_path(self.mlp(self.norm2(x)))
return x
class PatchMerging(nn.Module):
""" Patch Merging Layer
Args:
dim (int): Number of input channels.
norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
"""
def __init__(self, dim, norm_layer=nn.LayerNorm):
super().__init__()
self.dim = dim
self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)
self.norm = norm_layer(4 * dim)
def forward(self, x, H, W):
""" Forward function.
Args:
x: Input feature, tensor size (B, H*W, C).
H, W: Spatial resolution of the input feature.
"""
B, L, C = x.shape
assert L == H * W, "input feature has wrong size"
x = x.view(B, H, W, C)
# padding
pad_input = (H % 2 == 1) or (W % 2 == 1)
if pad_input:
x = F.pad(x, (0, 0, 0, W % 2, 0, H % 2))
x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C
x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C
x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C
x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C
x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C
x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C
x = self.norm(x)
x = self.reduction(x)
return x
class PatchEmbed(nn.Module):
""" Image to Patch Embedding
Args:
patch_size (int): Patch token size. Default: 4.
in_chans (int): Number of input image channels. Default: 3.
embed_dim (int): Number of linear projection output channels. Default: 96.
norm_layer (nn.Module, optional): Normalization layer. Default: None
"""
def __init__(self, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
super().__init__()
patch_size = to_2tuple(patch_size)
self.patch_size = patch_size
self.in_chans = in_chans
self.embed_dim = embed_dim
self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
if norm_layer is not None:
self.norm = norm_layer(embed_dim)
else:
self.norm = None
def forward(self, x):
"""Forward function."""
# padding
_, _, H, W = x.size()
if W % self.patch_size[1] != 0:
x = F.pad(x, (0, self.patch_size[1] - W % self.patch_size[1]))
if H % self.patch_size[0] != 0:
x = F.pad(x, (0, 0, 0, self.patch_size[0] - H % self.patch_size[0]))
x = self.proj(x) # B C Wh Ww
if self.norm is not None:
Wh, Ww = x.size(2), x.size(3)
x = x.flatten(2).transpose(1, 2)
x = self.norm(x)
x = x.transpose(1, 2).view(-1, self.embed_dim, Wh, Ww)
return x
class MultiModalSwinTransformer(nn.Module):
def __init__(self,
pretrain_img_size=224,
patch_size=4,
in_chans=3,
embed_dim=96,
depths=[2, 2, 6, 2],
num_heads=[3, 6, 12, 24],
window_size=7,
mlp_ratio=4.,
qkv_bias=True,
qk_scale=None,
drop_rate=0.,
attn_drop_rate=0.,
drop_path_rate=0.2,
norm_layer=nn.LayerNorm,
ape=False,
patch_norm=True,
out_indices=(0, 1, 2, 3),
frozen_stages=-1,
use_checkpoint=False,
num_heads_fusion=[1, 1, 1, 1],
fusion_drop=0.0
):
super().__init__()
self.pretrain_img_size = pretrain_img_size
self.num_layers = len(depths)
self.embed_dim = embed_dim
self.ape = ape
self.patch_norm = patch_norm
self.out_indices = out_indices
self.frozen_stages = frozen_stages
# split image into non-overlapping patches
self.patch_embed = PatchEmbed(
patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim,
norm_layer=norm_layer if self.patch_norm else None)
# absolute position embedding
if self.ape:
pretrain_img_size = to_2tuple(pretrain_img_size)
patch_size = to_2tuple(patch_size)
patches_resolution = [pretrain_img_size[0] // patch_size[0], pretrain_img_size[1] // patch_size[1]]
self.absolute_pos_embed = nn.Parameter(torch.zeros(1, embed_dim, patches_resolution[0], patches_resolution[1]))
trunc_normal_(self.absolute_pos_embed, std=.02)
self.pos_drop = nn.Dropout(p=drop_rate)
# stochastic depth
dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] # stochastic depth decay rule
# build layers
self.layers = nn.ModuleList()
for i_layer in range(self.num_layers):
layer = MMBasicLayer(
dim=int(embed_dim * 2 ** i_layer),
depth=depths[i_layer],
num_heads=num_heads[i_layer],
window_size=window_size,
mlp_ratio=mlp_ratio,
qkv_bias=qkv_bias,
qk_scale=qk_scale,
drop=drop_rate,
attn_drop=attn_drop_rate,
drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])],
norm_layer=norm_layer,
downsample=PatchMerging if (i_layer < self.num_layers - 1) else None,
use_checkpoint=use_checkpoint,
num_heads_fusion=num_heads_fusion[i_layer],
fusion_drop=fusion_drop
)
self.layers.append(layer)
num_features = [int(embed_dim * 2 ** i) for i in range(self.num_layers)]
self.num_features = num_features
# add a norm layer for each output
for i_layer in out_indices:
layer = norm_layer(num_features[i_layer])
layer_name = f'norm{i_layer}'
self.add_module(layer_name, layer)
self._freeze_stages()
def _freeze_stages(self):
if self.frozen_stages >= 0:
self.patch_embed.eval()
for param in self.patch_embed.parameters():
param.requires_grad = False
if self.frozen_stages >= 1 and self.ape:
self.absolute_pos_embed.requires_grad = False
if self.frozen_stages >= 2:
self.pos_drop.eval()
for i in range(0, self.frozen_stages - 1):
m = self.layers[i]
m.eval()
for param in m.parameters():
param.requires_grad = False
def init_weights(self, pretrained=None):
"""Initialize the weights in backbone.
Args:
pretrained (str, optional): Path to pre-trained weights.
Defaults to None.
"""
def _init_weights(m):
if isinstance(m, nn.Linear):
trunc_normal_(m.weight, std=.02)
if isinstance(m, nn.Linear) and m.bias is not None:
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.LayerNorm):
nn.init.constant_(m.bias, 0)
nn.init.constant_(m.weight, 1.0)
if isinstance(pretrained, str):
self.apply(_init_weights)
logger = get_root_logger()
load_checkpoint(self, pretrained, strict=('upernet' in pretrained), logger=logger)
elif pretrained is None:
self.apply(_init_weights)
else:
raise TypeError('pretrained must be a str or None')
def forward(self, x, l, l_mask):
"""Forward function."""
x = self.patch_embed(x)
Wh, Ww = x.size(2), x.size(3)
if self.ape:
# interpolate the position embedding to the corresponding size
absolute_pos_embed = F.interpolate(self.absolute_pos_embed, size=(Wh, Ww), mode='bicubic')
x = (x + absolute_pos_embed).flatten(2).transpose(1, 2) # B Wh*Ww C
else:
x = x.flatten(2).transpose(1, 2)
x = self.pos_drop(x)
outs = []
for i in range(self.num_layers):
layer = self.layers[i]
x_out, H, W, x, Wh, Ww = layer(x, Wh, Ww, l, l_mask)
if i in self.out_indices:
norm_layer = getattr(self, f'norm{i}')
x_out = norm_layer(x_out) # output of a Block has shape (B, H*W, dim)
out = x_out.view(-1, H, W, self.num_features[i]).permute(0, 3, 1, 2).contiguous()
outs.append(out)
return tuple(outs)
def train(self, mode=True):
"""Convert the model into training mode while keep layers freezed."""
super(MultiModalSwinTransformer, self).train(mode)
self._freeze_stages()
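# MMBasicLayer: one Swin stage (a sequence of W-MSA/SW-MSA blocks) followed by PWAM fusion
# with the language features and a gated residual; it returns the fused features as the
# stage output and, if a downsampling layer is given, a patch-merged tensor for the next stage.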
class MMBasicLayer(nn.Module):
def __init__(self,
dim,
depth,
num_heads,
window_size=7,
mlp_ratio=4.,
qkv_bias=True,
qk_scale=None,
drop=0.,
attn_drop=0.,
drop_path=0.,
norm_layer=nn.LayerNorm,
downsample=None,
use_checkpoint=False,
num_heads_fusion=1,
fusion_drop=0.0
):
super().__init__()
self.window_size = window_size
self.shift_size = window_size // 2
self.depth = depth
self.use_checkpoint = use_checkpoint
self.dim = dim
# build blocks
self.blocks = nn.ModuleList([
SwinTransformerBlock(
dim=dim,
num_heads=num_heads,
window_size=window_size,
shift_size=0 if (i % 2 == 0) else window_size // 2,
mlp_ratio=mlp_ratio,
qkv_bias=qkv_bias,
qk_scale=qk_scale,
drop=drop,
attn_drop=attn_drop,
drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path,
norm_layer=norm_layer)
for i in range(depth)])
# fuse before downsampling
        self.fusion = PWAM(dim,  # number of channels of both the visual input and the fused output
dim, # v_in
768, # l_in
dim, # key
dim, # value
num_heads=num_heads_fusion,
dropout=fusion_drop)
self.res_gate = nn.Sequential(
nn.Linear(dim, dim, bias=False),
nn.ReLU(),
nn.Linear(dim, dim, bias=False),
nn.Tanh()
)
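        # res_gate learns an element-wise modulation (via Tanh, in [-1, 1]) of the PWAM residual
        # before it is added back to the visual features in forward()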
# patch merging layer
if downsample is not None:
self.downsample = downsample(dim=dim, norm_layer=norm_layer)
else:
self.downsample = None
def forward(self, x, H, W, l, l_mask):
""" Forward function.
Args:
x: Input feature, tensor size (B, H*W, C).
H, W: Spatial resolution of the input feature.
"""
# calculate attention mask for SW-MSA
Hp = int(np.ceil(H / self.window_size)) * self.window_size
Wp = int(np.ceil(W / self.window_size)) * self.window_size
img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device) # 1 Hp Wp 1
h_slices = (slice(0, -self.window_size),
slice(-self.window_size, -self.shift_size),
slice(-self.shift_size, None))
w_slices = (slice(0, -self.window_size),
slice(-self.window_size, -self.shift_size),
slice(-self.shift_size, None))
cnt = 0
for h in h_slices:
for w in w_slices:
img_mask[:, h, w, :] = cnt
cnt += 1
mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1
mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
for blk in self.blocks:
blk.H, blk.W = H, W
if self.use_checkpoint:
x = checkpoint.checkpoint(blk, x, attn_mask)
else:
x = blk(x, attn_mask) # output of a Block has shape (B, H*W, dim)
# PWAM fusion
x_residual = self.fusion(x, l, l_mask)
# apply a gate on the residual
x = x + (self.res_gate(x_residual) * x_residual)
if self.downsample is not None:
x_down = self.downsample(x, H, W)
Wh, Ww = (H + 1) // 2, (W + 1) // 2
return x_residual, H, W, x_down, Wh, Ww
else:
return x_residual, H, W, x, H, W
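# PWAM (Pixel-Word Attention Module): projects the visual features, attends over the word
# features with SpatialImageLanguageAttention, and combines the two paths by element-wise
# multiplication before a final projection.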
class PWAM(nn.Module):
def __init__(self, dim, v_in_channels, l_in_channels, key_channels, value_channels, num_heads=0, dropout=0.0):
super(PWAM, self).__init__()
# input x shape: (B, H*W, dim)
self.vis_project = nn.Sequential(nn.Conv1d(dim, dim, 1, 1), # the init function sets bias to 0 if bias is True
nn.GELU(),
nn.Dropout(dropout)
)
self.image_lang_att = SpatialImageLanguageAttention(v_in_channels, # v_in
l_in_channels, # l_in
key_channels, # key
value_channels, # value
out_channels=value_channels, # out
num_heads=num_heads)
self.project_mm = nn.Sequential(nn.Conv1d(value_channels, value_channels, 1, 1),
nn.GELU(),
nn.Dropout(dropout)
)
def forward(self, x, l, l_mask):
# input x shape: (B, H*W, dim)
vis = self.vis_project(x.permute(0, 2, 1)) # (B, dim, H*W)
lang = self.image_lang_att(x, l, l_mask) # (B, H*W, dim)
lang = lang.permute(0, 2, 1) # (B, dim, H*W)
mm = torch.mul(vis, lang)
mm = self.project_mm(mm) # (B, dim, H*W)
mm = mm.permute(0, 2, 1) # (B, H*W, dim)
return mm
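# Multi-head cross-attention in which each visual position (query) attends over the word
# features (keys/values); padded word positions are masked out with a large negative bias
# before the softmax.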
class SpatialImageLanguageAttention(nn.Module):
def __init__(self, v_in_channels, l_in_channels, key_channels, value_channels, out_channels=None, num_heads=1):
super(SpatialImageLanguageAttention, self).__init__()
# x shape: (B, H*W, v_in_channels)
# l input shape: (B, l_in_channels, N_l)
# l_mask shape: (B, N_l, 1)
self.v_in_channels = v_in_channels
self.l_in_channels = l_in_channels
self.out_channels = out_channels
self.key_channels = key_channels
self.value_channels = value_channels
self.num_heads = num_heads
if out_channels is None:
self.out_channels = self.value_channels
# Keys: language features: (B, l_in_channels, #words)
# avoid any form of spatial normalization because a sentence contains many padding 0s
self.f_key = nn.Sequential(
nn.Conv1d(self.l_in_channels, self.key_channels, kernel_size=1, stride=1),
)
# Queries: visual features: (B, H*W, v_in_channels)
self.f_query = nn.Sequential(
nn.Conv1d(self.v_in_channels, self.key_channels, kernel_size=1, stride=1),
nn.InstanceNorm1d(self.key_channels),
)
# Values: language features: (B, l_in_channels, #words)
self.f_value = nn.Sequential(
nn.Conv1d(self.l_in_channels, self.value_channels, kernel_size=1, stride=1),
)
# Out projection
self.W = nn.Sequential(
nn.Conv1d(self.value_channels, self.out_channels, kernel_size=1, stride=1),
nn.InstanceNorm1d(self.out_channels),
)
def forward(self, x, l, l_mask):
# x shape: (B, H*W, v_in_channels)
# l input shape: (B, l_in_channels, N_l)
# l_mask shape: (B, N_l, 1)
B, HW = x.size(0), x.size(1)
x = x.permute(0, 2, 1) # (B, key_channels, H*W)
l_mask = l_mask.permute(0, 2, 1) # (B, N_l, 1) -> (B, 1, N_l)
query = self.f_query(x) # (B, key_channels, H*W) if Conv1D
query = query.permute(0, 2, 1) # (B, H*W, key_channels)
key = self.f_key(l) # (B, key_channels, N_l)
value = self.f_value(l) # (B, self.value_channels, N_l)
key = key * l_mask # (B, key_channels, N_l)
value = value * l_mask # (B, self.value_channels, N_l)
n_l = value.size(-1)
query = query.reshape(B, HW, self.num_heads, self.key_channels//self.num_heads).permute(0, 2, 1, 3)
# (b, num_heads, H*W, self.key_channels//self.num_heads)
key = key.reshape(B, self.num_heads, self.key_channels//self.num_heads, n_l)
# (b, num_heads, self.key_channels//self.num_heads, n_l)
value = value.reshape(B, self.num_heads, self.value_channels//self.num_heads, n_l)
        # (b, num_heads, self.value_channels//self.num_heads, n_l)
l_mask = l_mask.unsqueeze(1) # (b, 1, 1, n_l)
sim_map = torch.matmul(query, key) # (B, self.num_heads, H*W, N_l)
sim_map = (self.key_channels ** -.5) * sim_map # scaled dot product
        sim_map = sim_map + (1e4*l_mask - 1e4)  # add a large negative bias (-1e4) to padding positions
sim_map = F.softmax(sim_map, dim=-1) # (B, num_heads, h*w, N_l)
out = torch.matmul(sim_map, value.permute(0, 1, 3, 2)) # (B, num_heads, H*W, self.value_channels//num_heads)
out = out.permute(0, 2, 1, 3).contiguous().reshape(B, HW, self.value_channels) # (B, H*W, value_channels)
out = out.permute(0, 2, 1) # (B, value_channels, HW)
out = self.W(out) # (B, value_channels, HW)
out = out.permute(0, 2, 1) # (B, HW, value_channels)
return out
================================================
FILE: lib/mask_predictor.py
================================================
import torch
from torch import nn
from torch.nn import functional as F
from collections import OrderedDict
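# SimpleDecoding: a lightweight top-down decoder that repeatedly upsamples, concatenates the
# next higher-resolution feature map, refines with two 3x3 conv-BN-ReLU blocks, and ends in a
# 1x1 conv predicting 2 classes (background/foreground).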
class SimpleDecoding(nn.Module):
def __init__(self, c4_dims, factor=2):
super(SimpleDecoding, self).__init__()
hidden_size = c4_dims//factor
c4_size = c4_dims
c3_size = c4_dims//(factor**1)
c2_size = c4_dims//(factor**2)
c1_size = c4_dims//(factor**3)
self.conv1_4 = nn.Conv2d(c4_size+c3_size, hidden_size, 3, padding=1, bias=False)
self.bn1_4 = nn.BatchNorm2d(hidden_size)
self.relu1_4 = nn.ReLU()
self.conv2_4 = nn.Conv2d(hidden_size, hidden_size, 3, padding=1, bias=False)
self.bn2_4 = nn.BatchNorm2d(hidden_size)
self.relu2_4 = nn.ReLU()
self.conv1_3 = nn.Conv2d(hidden_size + c2_size, hidden_size, 3, padding=1, bias=False)
self.bn1_3 = nn.BatchNorm2d(hidden_size)
self.relu1_3 = nn.ReLU()
self.conv2_3 = nn.Conv2d(hidden_size, hidden_size, 3, padding=1, bias=False)
self.bn2_3 = nn.BatchNorm2d(hidden_size)
self.relu2_3 = nn.ReLU()
self.conv1_2 = nn.Conv2d(hidden_size + c1_size, hidden_size, 3, padding=1, bias=False)
self.bn1_2 = nn.BatchNorm2d(hidden_size)
self.relu1_2 = nn.ReLU()
self.conv2_2 = nn.Conv2d(hidden_size, hidden_size, 3, padding=1, bias=False)
self.bn2_2 = nn.BatchNorm2d(hidden_size)
self.relu2_2 = nn.ReLU()
self.conv1_1 = nn.Conv2d(hidden_size, 2, 1)
def forward(self, x_c4, x_c3, x_c2, x_c1):
# fuse Y4 and Y3
if x_c4.size(-2) < x_c3.size(-2) or x_c4.size(-1) < x_c3.size(-1):
x_c4 = F.interpolate(input=x_c4, size=(x_c3.size(-2), x_c3.size(-1)), mode='bilinear', align_corners=True)
x = torch.cat([x_c4, x_c3], dim=1)
x = self.conv1_4(x)
x = self.bn1_4(x)
x = self.relu1_4(x)
x = self.conv2_4(x)
x = self.bn2_4(x)
x = self.relu2_4(x)
# fuse top-down features and Y2 features
if x.size(-2) < x_c2.size(-2) or x.size(-1) < x_c2.size(-1):
x = F.interpolate(input=x, size=(x_c2.size(-2), x_c2.size(-1)), mode='bilinear', align_corners=True)
x = torch.cat([x, x_c2], dim=1)
x = self.conv1_3(x)
x = self.bn1_3(x)
x = self.relu1_3(x)
x = self.conv2_3(x)
x = self.bn2_3(x)
x = self.relu2_3(x)
# fuse top-down features and Y1 features
if x.size(-2) < x_c1.size(-2) or x.size(-1) < x_c1.size(-1):
x = F.interpolate(input=x, size=(x_c1.size(-2), x_c1.size(-1)), mode='bilinear', align_corners=True)
x = torch.cat([x, x_c1], dim=1)
x = self.conv1_2(x)
x = self.bn1_2(x)
x = self.relu1_2(x)
x = self.conv2_2(x)
x = self.bn2_2(x)
x = self.relu2_2(x)
return self.conv1_1(x)
================================================
FILE: lib/mmcv_custom/__init__.py
================================================
# -*- coding: utf-8 -*-
from .checkpoint import load_checkpoint
__all__ = ['load_checkpoint']
================================================
FILE: lib/mmcv_custom/checkpoint.py
================================================
# Copyright (c) Open-MMLab. All rights reserved.
import io
import os
import os.path as osp
import pkgutil
import time
import warnings
from collections import OrderedDict
from importlib import import_module
from tempfile import TemporaryDirectory
import torch
import torchvision
from torch.optim import Optimizer
from torch.utils import model_zoo
from torch.nn import functional as F
import mmcv
from mmcv.fileio import FileClient
from mmcv.fileio import load as load_file
from mmcv.parallel import is_module_wrapper
from mmcv.utils import mkdir_or_exist
from mmcv.runner import get_dist_info
ENV_MMCV_HOME = 'MMCV_HOME'
ENV_XDG_CACHE_HOME = 'XDG_CACHE_HOME'
DEFAULT_CACHE_DIR = '~/.cache'
def _get_mmcv_home():
mmcv_home = os.path.expanduser(
os.getenv(
ENV_MMCV_HOME,
os.path.join(
os.getenv(ENV_XDG_CACHE_HOME, DEFAULT_CACHE_DIR), 'mmcv')))
mkdir_or_exist(mmcv_home)
return mmcv_home
def load_state_dict(module, state_dict, strict=False, logger=None):
"""Load state_dict to a module.
This method is modified from :meth:`torch.nn.Module.load_state_dict`.
Default value for ``strict`` is set to ``False`` and the message for
param mismatch will NOT be shown if strict is False.
Args:
module (Module): Module that receives the state_dict.
state_dict (OrderedDict): Weights.
strict (bool): whether to strictly enforce that the keys
in :attr:`state_dict` match the keys returned by this module's
:meth:`~torch.nn.Module.state_dict` function. Default: ``False``.
logger (:obj:`logging.Logger`, optional): Logger to log the error
message. If not specified, print function will be used.
"""
unexpected_keys = []
all_missing_keys = []
err_msg = []
metadata = getattr(state_dict, '_metadata', None)
state_dict = state_dict.copy()
if metadata is not None:
state_dict._metadata = metadata
# use _load_from_state_dict to enable checkpoint version control
def load(module, prefix=''):
# recursively check parallel module in case that the model has a
# complicated structure, e.g., nn.Module(nn.Module(DDP))
if is_module_wrapper(module):
module = module.module
local_metadata = {} if metadata is None else metadata.get(
prefix[:-1], {})
module._load_from_state_dict(state_dict, prefix, local_metadata, True,
all_missing_keys, unexpected_keys,
err_msg)
for name, child in module._modules.items():
if child is not None:
load(child, prefix + name + '.')
load(module)
load = None # break load->load reference cycle
# ignore "num_batches_tracked" of BN layers
missing_keys = [
key for key in all_missing_keys if 'num_batches_tracked' not in key
]
if unexpected_keys:
err_msg.append('unexpected key in source '
f'state_dict: {", ".join(unexpected_keys)}\n')
if missing_keys:
err_msg.append(
f'missing keys in source state_dict: {", ".join(missing_keys)}\n')
if strict:
rank, _ = get_dist_info()
if len(err_msg) > 0 and rank == 0:
err_msg.insert(
0, 'The model and loaded state dict do not match exactly\n')
err_msg = '\n'.join(err_msg)
if strict:
raise RuntimeError(err_msg)
elif logger is not None:
logger.warning(err_msg)
else:
print(err_msg)
def load_url_dist(url, model_dir=None):
"""In distributed setting, this function only download checkpoint at local
rank 0."""
rank, world_size = get_dist_info()
rank = int(os.environ.get('LOCAL_RANK', rank))
if rank == 0:
checkpoint = model_zoo.load_url(url, model_dir=model_dir)
if world_size > 1:
torch.distributed.barrier()
if rank > 0:
checkpoint = model_zoo.load_url(url, model_dir=model_dir)
return checkpoint
def load_pavimodel_dist(model_path, map_location=None):
"""In distributed setting, this function only download checkpoint at local
rank 0."""
try:
from pavi import modelcloud
except ImportError:
raise ImportError(
'Please install pavi to load checkpoint from modelcloud.')
rank, world_size = get_dist_info()
rank = int(os.environ.get('LOCAL_RANK', rank))
if rank == 0:
model = modelcloud.get(model_path)
with TemporaryDirectory() as tmp_dir:
downloaded_file = osp.join(tmp_dir, model.name)
model.download(downloaded_file)
checkpoint = torch.load(downloaded_file, map_location=map_location)
if world_size > 1:
torch.distributed.barrier()
if rank > 0:
model = modelcloud.get(model_path)
with TemporaryDirectory() as tmp_dir:
downloaded_file = osp.join(tmp_dir, model.name)
model.download(downloaded_file)
checkpoint = torch.load(
downloaded_file, map_location=map_location)
return checkpoint
def load_fileclient_dist(filename, backend, map_location):
"""In distributed setting, this function only download checkpoint at local
rank 0."""
rank, world_size = get_dist_info()
rank = int(os.environ.get('LOCAL_RANK', rank))
allowed_backends = ['ceph']
if backend not in allowed_backends:
raise ValueError(f'Load from Backend {backend} is not supported.')
if rank == 0:
fileclient = FileClient(backend=backend)
buffer = io.BytesIO(fileclient.get(filename))
checkpoint = torch.load(buffer, map_location=map_location)
if world_size > 1:
torch.distributed.barrier()
if rank > 0:
fileclient = FileClient(backend=backend)
buffer = io.BytesIO(fileclient.get(filename))
checkpoint = torch.load(buffer, map_location=map_location)
return checkpoint
def get_torchvision_models():
model_urls = dict()
for _, name, ispkg in pkgutil.walk_packages(torchvision.models.__path__):
if ispkg:
continue
_zoo = import_module(f'torchvision.models.{name}')
if hasattr(_zoo, 'model_urls'):
_urls = getattr(_zoo, 'model_urls')
model_urls.update(_urls)
return model_urls
def get_external_models():
mmcv_home = _get_mmcv_home()
default_json_path = osp.join(mmcv.__path__[0], 'model_zoo/open_mmlab.json')
default_urls = load_file(default_json_path)
assert isinstance(default_urls, dict)
external_json_path = osp.join(mmcv_home, 'open_mmlab.json')
if osp.exists(external_json_path):
external_urls = load_file(external_json_path)
assert isinstance(external_urls, dict)
default_urls.update(external_urls)
return default_urls
def get_mmcls_models():
mmcls_json_path = osp.join(mmcv.__path__[0], 'model_zoo/mmcls.json')
mmcls_urls = load_file(mmcls_json_path)
return mmcls_urls
def get_deprecated_model_names():
deprecate_json_path = osp.join(mmcv.__path__[0],
'model_zoo/deprecated.json')
deprecate_urls = load_file(deprecate_json_path)
assert isinstance(deprecate_urls, dict)
return deprecate_urls
def _process_mmcls_checkpoint(checkpoint):
state_dict = checkpoint['state_dict']
new_state_dict = OrderedDict()
for k, v in state_dict.items():
if k.startswith('backbone.'):
new_state_dict[k[9:]] = v
new_checkpoint = dict(state_dict=new_state_dict)
return new_checkpoint
def _load_checkpoint(filename, map_location=None):
"""Load checkpoint from somewhere (modelzoo, file, url).
Args:
filename (str): Accept local filepath, URL, ``torchvision://xxx``,
``open-mmlab://xxx``. Please refer to ``docs/model_zoo.md`` for
details.
map_location (str | None): Same as :func:`torch.load`. Default: None.
Returns:
dict | OrderedDict: The loaded checkpoint. It can be either an
OrderedDict storing model weights or a dict containing other
information, which depends on the checkpoint.
"""
if filename.startswith('modelzoo://'):
warnings.warn('The URL scheme of "modelzoo://" is deprecated, please '
'use "torchvision://" instead')
model_urls = get_torchvision_models()
model_name = filename[11:]
checkpoint = load_url_dist(model_urls[model_name])
elif filename.startswith('torchvision://'):
model_urls = get_torchvision_models()
model_name = filename[14:]
checkpoint = load_url_dist(model_urls[model_name])
elif filename.startswith('open-mmlab://'):
model_urls = get_external_models()
model_name = filename[13:]
deprecated_urls = get_deprecated_model_names()
if model_name in deprecated_urls:
warnings.warn(f'open-mmlab://{model_name} is deprecated in favor '
f'of open-mmlab://{deprecated_urls[model_name]}')
model_name = deprecated_urls[model_name]
model_url = model_urls[model_name]
# check if is url
if model_url.startswith(('http://', 'https://')):
checkpoint = load_url_dist(model_url)
else:
filename = osp.join(_get_mmcv_home(), model_url)
if not osp.isfile(filename):
raise IOError(f'{filename} is not a checkpoint file')
checkpoint = torch.load(filename, map_location=map_location)
elif filename.startswith('mmcls://'):
model_urls = get_mmcls_models()
model_name = filename[8:]
checkpoint = load_url_dist(model_urls[model_name])
checkpoint = _process_mmcls_checkpoint(checkpoint)
elif filename.startswith(('http://', 'https://')):
checkpoint = load_url_dist(filename)
elif filename.startswith('pavi://'):
model_path = filename[7:]
checkpoint = load_pavimodel_dist(model_path, map_location=map_location)
elif filename.startswith('s3://'):
checkpoint = load_fileclient_dist(
filename, backend='ceph', map_location=map_location)
else:
if not osp.isfile(filename):
raise IOError(f'{filename} is not a checkpoint file')
checkpoint = torch.load(filename, map_location=map_location)
return checkpoint
def load_checkpoint(model,
filename,
map_location='cpu',
strict=False,
logger=None):
"""Load checkpoint from a file or URI.
Args:
model (Module): Module to load checkpoint.
filename (str): Accept local filepath, URL, ``torchvision://xxx``,
``open-mmlab://xxx``. Please refer to ``docs/model_zoo.md`` for
details.
map_location (str): Same as :func:`torch.load`.
strict (bool): Whether to allow different params for the model and
checkpoint.
logger (:mod:`logging.Logger` or None): The logger for error message.
Returns:
dict or OrderedDict: The loaded checkpoint.
"""
checkpoint = _load_checkpoint(filename, map_location)
# OrderedDict is a subclass of dict
if not isinstance(checkpoint, dict):
raise RuntimeError(
f'No state_dict found in checkpoint file {filename}')
# get state_dict from checkpoint
if 'state_dict' in checkpoint:
state_dict = checkpoint['state_dict']
elif 'model' in checkpoint:
state_dict = checkpoint['model']
else:
state_dict = checkpoint
# strip prefix of state_dict
if list(state_dict.keys())[0].startswith('module.'):
state_dict = {k[7:]: v for k, v in state_dict.items()}
    # for UPerNet weights only
if list(state_dict.keys())[0].startswith('backbone.'):
        print('Stripping the UPerNet prefix and loading backbone weights into our Swin encoder')
state_dict = {k.replace('backbone.', ''): v for k, v in state_dict.items() if k.startswith('backbone.')}
# for MoBY, load model of online branch
if sorted(list(state_dict.keys()))[0].startswith('encoder'):
state_dict = {k.replace('encoder.', ''): v for k, v in state_dict.items() if k.startswith('encoder.')}
# reshape absolute position embedding
if state_dict.get('absolute_pos_embed') is not None:
absolute_pos_embed = state_dict['absolute_pos_embed']
N1, L, C1 = absolute_pos_embed.size()
N2, C2, H, W = model.absolute_pos_embed.size()
if N1 != N2 or C1 != C2 or L != H*W:
logger.warning("Error in loading absolute_pos_embed, pass")
else:
state_dict['absolute_pos_embed'] = absolute_pos_embed.view(N2, H, W, C2).permute(0, 3, 1, 2)
# interpolate position bias table if needed
relative_position_bias_table_keys = [k for k in state_dict.keys() if "relative_position_bias_table" in k]
for table_key in relative_position_bias_table_keys:
table_pretrained = state_dict[table_key]
table_current = model.state_dict()[table_key]
L1, nH1 = table_pretrained.size()
L2, nH2 = table_current.size()
if nH1 != nH2:
logger.warning(f"Error in loading {table_key}, pass")
else:
if L1 != L2:
S1 = int(L1 ** 0.5)
S2 = int(L2 ** 0.5)
table_pretrained_resized = F.interpolate(
table_pretrained.permute(1, 0).view(1, nH1, S1, S1),
size=(S2, S2), mode='bicubic')
state_dict[table_key] = table_pretrained_resized.view(nH2, L2).permute(1, 0)
# load state_dict
load_state_dict(model, state_dict, strict, logger)
return checkpoint
def weights_to_cpu(state_dict):
"""Copy a model state_dict to cpu.
Args:
state_dict (OrderedDict): Model weights on GPU.
Returns:
        OrderedDict: Model weights on CPU.
"""
state_dict_cpu = OrderedDict()
for key, val in state_dict.items():
state_dict_cpu[key] = val.cpu()
return state_dict_cpu
def _save_to_state_dict(module, destination, prefix, keep_vars):
"""Saves module state to `destination` dictionary.
This method is modified from :meth:`torch.nn.Module._save_to_state_dict`.
Args:
module (nn.Module): The module to generate state_dict.
destination (dict): A dict where state will be stored.
prefix (str): The prefix for parameters and buffers used in this
module.
"""
for name, param in module._parameters.items():
if param is not None:
destination[prefix + name] = param if keep_vars else param.detach()
for name, buf in module._buffers.items():
# remove check of _non_persistent_buffers_set to allow nn.BatchNorm2d
if buf is not None:
destination[prefix + name] = buf if keep_vars else buf.detach()
def get_state_dict(module, destination=None, prefix='', keep_vars=False):
"""Returns a dictionary containing a whole state of the module.
Both parameters and persistent buffers (e.g. running averages) are
included. Keys are corresponding parameter and buffer names.
This method is modified from :meth:`torch.nn.Module.state_dict` to
recursively check parallel module in case that the model has a complicated
structure, e.g., nn.Module(nn.Module(DDP)).
Args:
module (nn.Module): The module to generate state_dict.
destination (OrderedDict): Returned dict for the state of the
module.
prefix (str): Prefix of the key.
keep_vars (bool): Whether to keep the variable property of the
parameters. Default: False.
Returns:
dict: A dictionary containing a whole state of the module.
"""
# recursively check parallel module in case that the model has a
# complicated structure, e.g., nn.Module(nn.Module(DDP))
if is_module_wrapper(module):
module = module.module
# below is the same as torch.nn.Module.state_dict()
if destination is None:
destination = OrderedDict()
destination._metadata = OrderedDict()
destination._metadata[prefix[:-1]] = local_metadata = dict(
version=module._version)
_save_to_state_dict(module, destination, prefix, keep_vars)
for name, child in module._modules.items():
if child is not None:
get_state_dict(
child, destination, prefix + name + '.', keep_vars=keep_vars)
for hook in module._state_dict_hooks.values():
hook_result = hook(module, destination, prefix, local_metadata)
if hook_result is not None:
destination = hook_result
return destination
def save_checkpoint(model, filename, optimizer=None, meta=None):
"""Save checkpoint to file.
The checkpoint will have 3 fields: ``meta``, ``state_dict`` and
``optimizer``. By default ``meta`` will contain version and time info.
Args:
model (Module): Module whose params are to be saved.
filename (str): Checkpoint filename.
optimizer (:obj:`Optimizer`, optional): Optimizer to be saved.
meta (dict, optional): Metadata to be saved in checkpoint.
"""
if meta is None:
meta = {}
elif not isinstance(meta, dict):
raise TypeError(f'meta must be a dict or None, but got {type(meta)}')
meta.update(mmcv_version=mmcv.__version__, time=time.asctime())
if is_module_wrapper(model):
model = model.module
if hasattr(model, 'CLASSES') and model.CLASSES is not None:
# save class name to the meta
meta.update(CLASSES=model.CLASSES)
checkpoint = {
'meta': meta,
'state_dict': weights_to_cpu(get_state_dict(model))
}
# save optimizer state dict in the checkpoint
if isinstance(optimizer, Optimizer):
checkpoint['optimizer'] = optimizer.state_dict()
elif isinstance(optimizer, dict):
checkpoint['optimizer'] = {}
for name, optim in optimizer.items():
checkpoint['optimizer'][name] = optim.state_dict()
if filename.startswith('pavi://'):
try:
from pavi import modelcloud
from pavi.exception import NodeNotFoundError
except ImportError:
raise ImportError(
'Please install pavi to load checkpoint from modelcloud.')
model_path = filename[7:]
root = modelcloud.Folder()
model_dir, model_name = osp.split(model_path)
try:
model = modelcloud.get(model_dir)
except NodeNotFoundError:
model = root.create_training_model(model_dir)
with TemporaryDirectory() as tmp_dir:
checkpoint_file = osp.join(tmp_dir, model_name)
with open(checkpoint_file, 'wb') as f:
torch.save(checkpoint, f)
f.flush()
model.create_file(checkpoint_file, name=model_name)
else:
mmcv.mkdir_or_exist(osp.dirname(filename))
# immediately flush buffer
with open(filename, 'wb') as f:
torch.save(checkpoint, f)
f.flush()
================================================
FILE: lib/segmentation.py
================================================
import torch
import torch.nn as nn
from .mask_predictor import SimpleDecoding
from .backbone import MultiModalSwinTransformer
from ._utils import LAVT, LAVTOne
__all__ = ['lavt', 'lavt_one']
# LAVT
def _segm_lavt(pretrained, args):
# initialize the SwinTransformer backbone with the specified version
if args.swin_type == 'tiny':
embed_dim = 96
depths = [2, 2, 6, 2]
num_heads = [3, 6, 12, 24]
elif args.swin_type == 'small':
embed_dim = 96
depths = [2, 2, 18, 2]
num_heads = [3, 6, 12, 24]
elif args.swin_type == 'base':
embed_dim = 128
depths = [2, 2, 18, 2]
num_heads = [4, 8, 16, 32]
elif args.swin_type == 'large':
embed_dim = 192
depths = [2, 2, 18, 2]
num_heads = [6, 12, 24, 48]
else:
assert False
# args.window12 added for test.py because state_dict is loaded after model initialization
if 'window12' in pretrained or args.window12:
print('Window size 12!')
window_size = 12
else:
window_size = 7
if args.mha:
mha = args.mha.split('-') # if non-empty, then ['a', 'b', 'c', 'd']
mha = [int(a) for a in mha]
else:
mha = [1, 1, 1, 1]
out_indices = (0, 1, 2, 3)
backbone = MultiModalSwinTransformer(embed_dim=embed_dim, depths=depths, num_heads=num_heads,
window_size=window_size,
ape=False, drop_path_rate=0.3, patch_norm=True,
out_indices=out_indices,
use_checkpoint=False, num_heads_fusion=mha,
fusion_drop=args.fusion_drop
)
if pretrained:
print('Initializing Multi-modal Swin Transformer weights from ' + pretrained)
backbone.init_weights(pretrained=pretrained)
else:
print('Randomly initialize Multi-modal Swin Transformer weights.')
backbone.init_weights()
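    # the deepest Swin stage outputs 8 * embed_dim channels (channels double at each of the
    # three downsampling steps), so the decoder head is built with that input width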
model_map = [SimpleDecoding, LAVT]
classifier = model_map[0](8*embed_dim)
base_model = model_map[1]
model = base_model(backbone, classifier)
return model
def _load_model_lavt(pretrained, args):
model = _segm_lavt(pretrained, args)
return model
def lavt(pretrained='', args=None):
return _load_model_lavt(pretrained, args)
###############################################
# LAVT One: put BERT inside the overall model #
###############################################
def _segm_lavt_one(pretrained, args):
# initialize the SwinTransformer backbone with the specified version
if args.swin_type == 'tiny':
embed_dim = 96
depths = [2, 2, 6, 2]
num_heads = [3, 6, 12, 24]
elif args.swin_type == 'small':
embed_dim = 96
depths = [2, 2, 18, 2]
num_heads = [3, 6, 12, 24]
elif args.swin_type == 'base':
embed_dim = 128
depths = [2, 2, 18, 2]
num_heads = [4, 8, 16, 32]
elif args.swin_type == 'large':
embed_dim = 192
depths = [2, 2, 18, 2]
num_heads = [6, 12, 24, 48]
else:
assert False
# args.window12 added for test.py because state_dict is loaded after model initialization
if 'window12' in pretrained or args.window12:
print('Window size 12!')
window_size = 12
else:
window_size = 7
if args.mha:
mha = args.mha.split('-') # if non-empty, then ['a', 'b', 'c', 'd']
mha = [int(a) for a in mha]
else:
mha = [1, 1, 1, 1]
out_indices = (0, 1, 2, 3)
backbone = MultiModalSwinTransformer(embed_dim=embed_dim, depths=depths, num_heads=num_heads,
window_size=window_size,
ape=False, drop_path_rate=0.3, patch_norm=True,
out_indices=out_indices,
use_checkpoint=False, num_heads_fusion=mha,
fusion_drop=args.fusion_drop
)
if pretrained:
print('Initializing Multi-modal Swin Transformer weights from ' + pretrained)
backbone.init_weights(pretrained=pretrained)
else:
print('Randomly initialize Multi-modal Swin Transformer weights.')
backbone.init_weights()
model_map = [SimpleDecoding, LAVTOne]
classifier = model_map[0](8*embed_dim)
base_model = model_map[1]
model = base_model(backbone, classifier, args)
return model
def _load_model_lavt_one(pretrained, args):
model = _segm_lavt_one(pretrained, args)
return model
def lavt_one(pretrained='', args=None):
return _load_model_lavt_one(pretrained, args)
================================================
FILE: refer/LICENSE
================================================
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
================================================
FILE: refer/Makefile
================================================
all:
# install pycocotools/mask locally
# copy from https://github.com/pdollar/coco.git
python setup.py build_ext --inplace
rm -rf build
================================================
FILE: refer/README.md
================================================
## Note
This API is able to load all 4 referring expression datasets, i.e., RefClef, RefCOCO, RefCOCO+ and RefCOCOg.
They come with different train/val/test splits created by UNC, Google and UC Berkeley, respectively. We provide all of these splits here.
## Citation
If you used the three datasets RefClef, RefCOCO and RefCOCO+ that were collected by UNC, please consider citing our EMNLP 2014 paper; if you want to compare with our recent results, please check our ECCV 2016 paper.
```
Kazemzadeh, Sahar, et al. "ReferItGame: Referring to Objects in Photographs of Natural Scenes." EMNLP 2014.
Yu, Licheng, et al. "Modeling Context in Referring Expressions." ECCV 2016.
```
## Setup
Run "make" before using the code.
It will generate ``_mask.c`` and ``_mask.so`` in the ``external/`` folder.
These mask-related codes are copied from mscoco [API](https://github.com/pdollar/coco).
## Download
Download the cleaned data and extract it into the "data" folder:
- 1) http://bvisionweb1.cs.unc.edu/licheng/referit/data/refclef.zip
- 2) http://bvisionweb1.cs.unc.edu/licheng/referit/data/refcoco.zip
- 3) http://bvisionweb1.cs.unc.edu/licheng/referit/data/refcoco+.zip
- 4) http://bvisionweb1.cs.unc.edu/licheng/referit/data/refcocog.zip
## Prepare Images
Add an ``mscoco`` folder into ``data/images``; the images can be downloaded from [mscoco](http://mscoco.org/dataset/#overview).
COCO's images are used for RefCOCO, RefCOCO+ and RefCOCOg.
For RefCLEF, please add ``saiapr_tc-12`` into the ``data/images`` folder. We extracted the 19,997 images related to the cleaned RefCLEF dataset, a subset of the original [imageCLEF](http://imageclef.org/SIAPRdata) collection. Download the [subset](http://bvisionweb1.cs.unc.edu/licheng/referit/data/images/saiapr_tc-12.zip) and unzip it to ``data/images/saiapr_tc-12``.
## How to use
The "refer.py" is able to load all 4 datasets with different kinds of data split by UNC, Google, UMD and UC Berkeley.
**Note for RefCOCOg, we suggest use UMD's split which has train/val/test splits and there is no overlap of images between different split.**
```python
# locate your own data_root, and choose the dataset_splitBy you want to use
refer = REFER(data_root, dataset='refclef', splitBy='unc')
refer = REFER(data_root, dataset='refclef', splitBy='berkeley') # 2 train and 1 test images missed
refer = REFER(data_root, dataset='refcoco', splitBy='unc')
refer = REFER(data_root, dataset='refcoco', splitBy='google')
refer = REFER(data_root, dataset='refcoco+', splitBy='unc')
refer = REFER(data_root, dataset='refcocog', splitBy='google') # test split not released yet
refer = REFER(data_root, dataset='refcocog', splitBy='umd') # Recommended, including train/val/test
```
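Once loaded, the ``refer`` object exposes the usual lookup helpers. Below is a minimal sketch (method names such as ``getRefIds``, ``loadRefs`` and ``getMask`` follow ``refer.py`` in this repo; the loop bounds and printed values are just illustrative):
```python
# iterate over a few training referring expressions and fetch their masks
ref_ids = refer.getRefIds(split='train')
for ref_id in ref_ids[:3]:
    ref = refer.loadRefs(ref_id)[0]        # dict with 'sentences', 'ann_id', 'image_id', ...
    sents = [s['sent'] for s in ref['sentences']]
    mask = refer.getMask(ref)['mask']      # H x W binary numpy array
    print(ref['image_id'], sents, mask.sum())
```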
================================================
FILE: refer/data/README.md
================================================
This directory should contain the following data:
```
$DATA_PATH
├── images
│ ├── mscoco
│ └── saiaprtc12
├── refcoco
│ ├── instances.json
│ ├── refs(google).p
│ └── refs(unc).p
├── refcoco+
│ ├── instances.json
│ └── refs(unc).p
├── refcocog
│ ├── instances.json
│ └── refs(google).p
└── refclef
├── instances.json
├── refs(unc).p
└── refs(berkeley).p
```
Note: each ``detections/xxx.json`` contains
``{'dets': [{'box': {x, y, w, h}, 'image_id', 'object_id', 'score'}, ...]}``. The ``object_id`` and ``score`` might be missing, depending on which proposal/detection technique we are using.
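For illustration only (the values below are made up, and the exact fields depend on the proposal/detection technique), one entry of ``dets`` could look like:
```
{"box": {"x": 120.5, "y": 64.0, "w": 83.2, "h": 190.7}, "image_id": 9, "object_id": 17, "score": 0.87}
```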
## Download
Download my cleaned data and extract them into this folder.
- 1) http://bvisionweb1.cs.unc.edu/licheng/referit/data/refclef.zip
- 2) http://bvisionweb1.cs.unc.edu/licheng/referit/data/refcoco.zip
- 3) http://bvisionweb1.cs.unc.edu/licheng/referit/data/refcoco+.zip
- 4) http://bvisionweb1.cs.unc.edu/licheng/referit/data/refcocog.zip
Besides, make a folder named "images".
Add "mscoco" into "images/".
Download MSCOCO from [mscoco](http://mscoco.org/dataset/#overview)
Add "saiapr_tc-12" into "images/". I only extracted the related images as a subset of the original [imageCLEF](http://imageclef.org/SIAPRdata), i.e., 19997 images. Please download the subset from here (http://bvisionweb1.cs.unc.edu/licheng/referit/data/images/saiapr_tc-12.zip).
================================================
FILE: refer/evaluation/__init__.py
================================================
__author__ = 'licheng'
================================================
FILE: refer/evaluation/bleu/LICENSE
================================================
Copyright (c) 2015 Xinlei Chen, Hao Fang, Tsung-Yi Lin, and Ramakrishna Vedantam
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
================================================
FILE: refer/evaluation/bleu/__init__.py
================================================
__author__ = 'tylin'
================================================
FILE: refer/evaluation/bleu/bleu.py
================================================
#!/usr/bin/env python
#
# File Name : bleu.py
#
# Description : Wrapper for BLEU scorer.
#
# Creation Date : 06-01-2015
# Last Modified : Thu 19 Mar 2015 09:13:28 PM PDT
# Authors : Hao Fang and Tsung-Yi Lin
from bleu_scorer import BleuScorer
class Bleu:
def __init__(self, n=4):
# by default, compute BLEU score up to 4-grams
self._n = n
self._hypo_for_image = {}
self.ref_for_image = {}
def compute_score(self, gts, res):
assert(gts.keys() == res.keys())
imgIds = gts.keys()
bleu_scorer = BleuScorer(n=self._n)
for id in imgIds:
hypo = res[id]
ref = gts[id]
# Sanity check.
assert(type(hypo) is list)
assert(len(hypo) == 1)
assert(type(ref) is list)
assert(len(ref) >= 1)
bleu_scorer += (hypo[0], ref)
#score, scores = bleu_scorer.compute_score(option='shortest')
score, scores = bleu_scorer.compute_score(option='closest', verbose=1)
#score, scores = bleu_scorer.compute_score(option='average', verbose=1)
# return (bleu, bleu_info)
return score, scores
def method(self):
return "Bleu"
================================================
FILE: refer/evaluation/bleu/bleu_scorer.py
================================================
#!/usr/bin/env python
# bleu_scorer.py
# David Chiang
# Copyright (c) 2004-2006 University of Maryland. All rights
# reserved. Do not redistribute without permission from the
# author. Not for commercial use.
# Modified by:
# Hao Fang
# Tsung-Yi Lin
'''Provides:
cook_refs(refs, n=4): Transform a list of reference sentences as strings into a form usable by cook_test().
cook_test(test, refs, n=4): Transform a test sentence as a string (together with the cooked reference sentences) into a form usable by score_cooked().
'''
import copy
import sys, math, re
from collections import defaultdict
def precook(s, n=4, out=False):
"""Takes a string as input and returns an object that can be given to
either cook_refs or cook_test. This is optional: cook_refs and cook_test
can take string arguments as well."""
words = s.split()
counts = defaultdict(int)
for k in xrange(1,n+1):
for i in xrange(len(words)-k+1):
ngram = tuple(words[i:i+k])
counts[ngram] += 1
return (len(words), counts)
def cook_refs(refs, eff=None, n=4): ## lhuang: oracle will call with "average"
'''Takes a list of reference sentences for a single segment
and returns an object that encapsulates everything that BLEU
needs to know about them.'''
reflen = []
maxcounts = {}
for ref in refs:
rl, counts = precook(ref, n)
reflen.append(rl)
for (ngram,count) in counts.iteritems():
maxcounts[ngram] = max(maxcounts.get(ngram,0), count)
# Calculate effective reference sentence length.
if eff == "shortest":
reflen = min(reflen)
elif eff == "average":
reflen = float(sum(reflen))/len(reflen)
## lhuang: N.B.: leave reflen computation to the very end!!
## lhuang: N.B.: in case of "closest", keep a list of reflens!! (bad design)
return (reflen, maxcounts)
def cook_test(test, (reflen, refmaxcounts), eff=None, n=4):
'''Takes a test sentence and returns an object that
encapsulates everything that BLEU needs to know about it.'''
testlen, counts = precook(test, n, True)
result = {}
# Calculate effective reference sentence length.
if eff == "closest":
result["reflen"] = min((abs(l-testlen), l) for l in reflen)[1]
else: ## i.e., "average" or "shortest" or None
result["reflen"] = reflen
result["testlen"] = testlen
result["guess"] = [max(0,testlen-k+1) for k in xrange(1,n+1)]
result['correct'] = [0]*n
for (ngram, count) in counts.iteritems():
result["correct"][len(ngram)-1] += min(refmaxcounts.get(ngram,0), count)
return result
class BleuScorer(object):
"""Bleu scorer.
"""
__slots__ = "n", "crefs", "ctest", "_score", "_ratio", "_testlen", "_reflen", "special_reflen"
# special_reflen is used in oracle (proportional effective ref len for a node).
def copy(self):
''' copy the refs.'''
new = BleuScorer(n=self.n)
new.ctest = copy.copy(self.ctest)
new.crefs = copy.copy(self.crefs)
new._score = None
return new
def __init__(self, test=None, refs=None, n=4, special_reflen=None):
''' singular instance '''
self.n = n
self.crefs = []
self.ctest = []
self.cook_append(test, refs)
self.special_reflen = special_reflen
def cook_append(self, test, refs):
'''called by constructor and __iadd__ to avoid creating new instances.'''
if refs is not None:
self.crefs.append(cook_refs(refs))
if test is not None:
cooked_test = cook_test(test, self.crefs[-1])
self.ctest.append(cooked_test) ## N.B.: -1
else:
self.ctest.append(None) # lens of crefs and ctest have to match
self._score = None ## need to recompute
def ratio(self, option=None):
self.compute_score(option=option)
return self._ratio
def score_ratio(self, option=None):
'''return (bleu, len_ratio) pair'''
return (self.fscore(option=option), self.ratio(option=option))
def score_ratio_str(self, option=None):
return "%.4f (%.2f)" % self.score_ratio(option)
def reflen(self, option=None):
self.compute_score(option=option)
return self._reflen
def testlen(self, option=None):
self.compute_score(option=option)
return self._testlen
def retest(self, new_test):
if type(new_test) is str:
new_test = [new_test]
assert len(new_test) == len(self.crefs), new_test
self.ctest = []
for t, rs in zip(new_test, self.crefs):
self.ctest.append(cook_test(t, rs))
self._score = None
return self
def rescore(self, new_test):
''' replace test(s) with new test(s), and returns the new score.'''
return self.retest(new_test).compute_score()
def size(self):
assert len(self.crefs) == len(self.ctest), "refs/test mismatch! %d<>%d" % (len(self.crefs), len(self.ctest))
return len(self.crefs)
def __iadd__(self, other):
'''add an instance (e.g., from another sentence).'''
if type(other) is tuple:
## avoid creating new BleuScorer instances
self.cook_append(other[0], other[1])
else:
assert self.compatible(other), "incompatible BLEUs."
self.ctest.extend(other.ctest)
self.crefs.extend(other.crefs)
self._score = None ## need to recompute
return self
def compatible(self, other):
return isinstance(other, BleuScorer) and self.n == other.n
def single_reflen(self, option="average"):
return self._single_reflen(self.crefs[0][0], option)
def _single_reflen(self, reflens, option=None, testlen=None):
if option == "shortest":
reflen = min(reflens)
elif option == "average":
reflen = float(sum(reflens))/len(reflens)
elif option == "closest":
reflen = min((abs(l-testlen), l) for l in reflens)[1]
else:
assert False, "unsupported reflen option %s" % option
return reflen
def recompute_score(self, option=None, verbose=0):
self._score = None
return self.compute_score(option, verbose)
def compute_score(self, option=None, verbose=0):
n = self.n
small = 1e-9
tiny = 1e-15 ## so that if guess is 0 still return 0
bleu_list = [[] for _ in range(n)]
if self._score is not None:
return self._score
if option is None:
option = "average" if len(self.crefs) == 1 else "closest"
self._testlen = 0
self._reflen = 0
totalcomps = {'testlen':0, 'reflen':0, 'guess':[0]*n, 'correct':[0]*n}
# for each sentence
for comps in self.ctest:
testlen = comps['testlen']
self._testlen += testlen
if self.special_reflen is None: ## need computation
reflen = self._single_reflen(comps['reflen'], option, testlen)
else:
reflen = self.special_reflen
self._reflen += reflen
for key in ['guess','correct']:
for k in xrange(n):
totalcomps[key][k] += comps[key][k]
# append per image bleu score
bleu = 1.
for k in xrange(n):
bleu *= (float(comps['correct'][k]) + tiny) \
/(float(comps['guess'][k]) + small)
bleu_list[k].append(bleu ** (1./(k+1)))
ratio = (testlen + tiny) / (reflen + small) ## N.B.: avoid zero division
if ratio < 1:
for k in xrange(n):
bleu_list[k][-1] *= math.exp(1 - 1/ratio)
if verbose > 1:
print comps, reflen
totalcomps['reflen'] = self._reflen
totalcomps['testlen'] = self._testlen
bleus = []
bleu = 1.
for k in xrange(n):
bleu *= float(totalcomps['correct'][k] + tiny) \
/ (totalcomps['guess'][k] + small)
bleus.append(bleu ** (1./(k+1)))
ratio = (self._testlen + tiny) / (self._reflen + small) ## N.B.: avoid zero division
if ratio < 1:
for k in xrange(n):
bleus[k] *= math.exp(1 - 1/ratio)
if verbose > 0:
print totalcomps
print "ratio:", ratio
self._score = bleus
return self._score, bleu_list
================================================
FILE: refer/evaluation/cider/__init__.py
================================================
__author__ = 'tylin'
================================================
FILE: refer/evaluation/cider/cider.py
================================================
# Filename: cider.py
#
# Description: Describes the class to compute the CIDEr (Consensus-Based Image Description Evaluation) Metric
# by Vedantam, Zitnick, and Parikh (http://arxiv.org/abs/1411.5726)
#
# Creation Date: Sun Feb 8 14:16:54 2015
#
# Authors: Ramakrishna Vedantam and Tsung-Yi Lin
from cider_scorer import CiderScorer
import pdb
class Cider:
"""
Main Class to compute the CIDEr metric
"""
def __init__(self, test=None, refs=None, n=4, sigma=6.0):
# set cider to sum over 1 to 4-grams
self._n = n
# set the standard deviation parameter for gaussian penalty
self._sigma = sigma
def compute_score(self, gts, res):
"""
Main function to compute CIDEr score
:param hypo_for_image (dict) : dictionary with key and value
ref_for_image (dict) : dictionary with key and value
:return: cider (float) : computed CIDEr score for the corpus
"""
assert(gts.keys() == res.keys())
imgIds = gts.keys()
cider_scorer = CiderScorer(n=self._n, sigma=self._sigma)
for id in imgIds:
hypo = res[id]
ref = gts[id]
# Sanity check.
assert(type(hypo) is list)
assert(len(hypo) == 1)
assert(type(ref) is list)
assert(len(ref) > 0)
cider_scorer += (hypo[0], ref)
(score, scores) = cider_scorer.compute_score()
return score, scores
def method(self):
return "CIDEr"
================================================
FILE: refer/evaluation/cider/cider_scorer.py
================================================
#!/usr/bin/env python
# Tsung-Yi Lin
# Ramakrishna Vedantam
import copy
from collections import defaultdict
import numpy as np
import pdb
import math
def precook(s, n=4, out=False):
"""
Takes a string as input and returns an object that can be given to
either cook_refs or cook_test. This is optional: cook_refs and cook_test
can take string arguments as well.
:param s: string : sentence to be converted into ngrams
:param n: int : number of ngrams for which representation is calculated
:return: term frequency vector for occurring ngrams
"""
words = s.split()
counts = defaultdict(int)
for k in xrange(1,n+1):
for i in xrange(len(words)-k+1):
ngram = tuple(words[i:i+k])
counts[ngram] += 1
return counts
def cook_refs(refs, n=4): ## lhuang: oracle will call with "average"
'''Takes a list of reference sentences for a single segment
and returns an object that encapsulates everything that BLEU
needs to know about them.
:param refs: list of string : reference sentences for some image
:param n: int : number of ngrams for which (ngram) representation is calculated
:return: result (list of dict)
'''
return [precook(ref, n) for ref in refs]
def cook_test(test, n=4):
'''Takes a test sentence and returns an object that
encapsulates everything that BLEU needs to know about it.
:param test: list of string : hypothesis sentence for some image
:param n: int : number of ngrams for which (ngram) representation is calculated
:return: result (dict)
'''
return precook(test, n, True)
class CiderScorer(object):
"""CIDEr scorer.
"""
def copy(self):
''' copy the refs.'''
new = CiderScorer(n=self.n)
new.ctest = copy.copy(self.ctest)
new.crefs = copy.copy(self.crefs)
return new
def __init__(self, test=None, refs=None, n=4, sigma=6.0):
''' singular instance '''
self.n = n
self.sigma = sigma
self.crefs = []
self.ctest = []
self.document_frequency = defaultdict(float)
self.cook_append(test, refs)
self.ref_len = None
def cook_append(self, test, refs):
'''called by constructor and __iadd__ to avoid creating new instances.'''
if refs is not None:
self.crefs.append(cook_refs(refs))
if test is not None:
self.ctest.append(cook_test(test)) ## N.B.: -1
else:
self.ctest.append(None) # lens of crefs and ctest have to match
def size(self):
assert len(self.crefs) == len(self.ctest), "refs/test mismatch! %d<>%d" % (len(self.crefs), len(self.ctest))
return len(self.crefs)
def __iadd__(self, other):
'''add an instance (e.g., from another sentence).'''
if type(other) is tuple:
## avoid creating new CiderScorer instances
self.cook_append(other[0], other[1])
else:
self.ctest.extend(other.ctest)
self.crefs.extend(other.crefs)
return self
def compute_doc_freq(self):
'''
Compute term frequency for reference data.
This will be used to compute idf (inverse document frequency) later.
The term frequency is stored in the object
:return: None
'''
for refs in self.crefs:
# refs, k ref captions of one image
for ngram in set([ngram for ref in refs for (ngram,count) in ref.iteritems()]):
self.document_frequency[ngram] += 1
# maxcounts[ngram] = max(maxcounts.get(ngram,0), count)
def compute_cider(self):
def counts2vec(cnts):
"""
Function maps counts of ngram to vector of tfidf weights.
The function returns vec, an array of dictionary that store mapping of n-gram and tf-idf weights.
The n-th entry of array denotes length of n-grams.
:param cnts:
:return: vec (array of dict), norm (array of float), length (int)
"""
vec = [defaultdict(float) for _ in range(self.n)]
length = 0
norm = [0.0 for _ in range(self.n)]
for (ngram,term_freq) in cnts.iteritems():
# give word count 1 if it doesn't appear in reference corpus
df = np.log(max(1.0, self.document_frequency[ngram]))
# ngram index
n = len(ngram)-1
# tf (term_freq) * idf (precomputed idf) for n-grams
vec[n][ngram] = float(term_freq)*(self.ref_len - df)
# compute norm for the vector. the norm will be used for computing similarity
norm[n] += pow(vec[n][ngram], 2)
if n == 1:
length += term_freq
norm = [np.sqrt(n) for n in norm]
return vec, norm, length
def sim(vec_hyp, vec_ref, norm_hyp, norm_ref, length_hyp, length_ref):
'''
Compute the cosine similarity of two vectors.
:param vec_hyp: array of dictionary for vector corresponding to hypothesis
:param vec_ref: array of dictionary for vector corresponding to reference
:param norm_hyp: array of float for vector corresponding to hypothesis
:param norm_ref: array of float for vector corresponding to reference
:param length_hyp: int containing length of hypothesis
:param length_ref: int containing length of reference
:return: array of score for each n-grams cosine similarity
'''
delta = float(length_hyp - length_ref)
# measure cosine similarity
val = np.array([0.0 for _ in range(self.n)])
for n in range(self.n):
# ngram
for (ngram,count) in vec_hyp[n].iteritems():
# vrama91 : added clipping
val[n] += min(vec_hyp[n][ngram], vec_ref[n][ngram]) * vec_ref[n][ngram]
if (norm_hyp[n] != 0) and (norm_ref[n] != 0):
val[n] /= (norm_hyp[n]*norm_ref[n])
assert(not math.isnan(val[n]))
# vrama91: added a length based gaussian penalty
val[n] *= np.e**(-(delta**2)/(2*self.sigma**2))
return val
# compute log reference length
self.ref_len = np.log(float(len(self.crefs)))
scores = []
for test, refs in zip(self.ctest, self.crefs):
# compute vector for test captions
vec, norm, length = counts2vec(test)
# compute vector for ref captions
score = np.array([0.0 for _ in range(self.n)])
for ref in refs:
vec_ref, norm_ref, length_ref = counts2vec(ref)
score += sim(vec, vec_ref, norm, norm_ref, length, length_ref)
# change by vrama91 - mean of ngram scores, instead of sum
score_avg = np.mean(score)
# divide by number of references
score_avg /= len(refs)
# multiply score by 10
score_avg *= 10.0
# append score of an image to the score list
scores.append(score_avg)
return scores
def compute_score(self, option=None, verbose=0):
# compute idf
self.compute_doc_freq()
# assert to check document frequency
assert(len(self.ctest) >= max(self.document_frequency.values()))
# compute cider score
score = self.compute_cider()
# debug
# print score
return np.mean(np.array(score)), np.array(score)
================================================
FILE: refer/evaluation/meteor/__init__.py
================================================
__author__ = 'tylin'
================================================
FILE: refer/evaluation/meteor/meteor.py
================================================
#!/usr/bin/env python
# Python wrapper for METEOR implementation, by Xinlei Chen
# Acknowledge Michael Denkowski for the generous discussion and help
import os
import sys
import subprocess
import threading
# Assumes meteor-1.5.jar is in the same directory as meteor.py. Change as needed.
METEOR_JAR = 'meteor-1.5.jar'
# print METEOR_JAR
class Meteor:
def __init__(self):
self.meteor_cmd = ['java', '-jar', '-Xmx2G', METEOR_JAR, \
'-', '-', '-stdio', '-l', 'en', '-norm']
self.meteor_p = subprocess.Popen(self.meteor_cmd, \
cwd=os.path.dirname(os.path.abspath(__file__)), \
stdin=subprocess.PIPE, \
stdout=subprocess.PIPE, \
stderr=subprocess.PIPE)
# Used to guarantee thread safety
self.lock = threading.Lock()
def compute_score(self, gts, res):
assert(gts.keys() == res.keys())
imgIds = gts.keys()
scores = []
eval_line = 'EVAL'
self.lock.acquire()
for i in imgIds:
assert(len(res[i]) == 1)
stat = self._stat(res[i][0], gts[i])
eval_line += ' ||| {}'.format(stat)
self.meteor_p.stdin.write('{}\n'.format(eval_line))
for i in range(0,len(imgIds)):
scores.append(float(self.meteor_p.stdout.readline().strip()))
score = float(self.meteor_p.stdout.readline().strip())
self.lock.release()
return score, scores
def method(self):
return "METEOR"
def _stat(self, hypothesis_str, reference_list):
# SCORE ||| reference 1 words ||| reference n words ||| hypothesis words
hypothesis_str = hypothesis_str.replace('|||','').replace('  ',' ')
score_line = ' ||| '.join(('SCORE', ' ||| '.join(reference_list), hypothesis_str))
self.meteor_p.stdin.write('{}\n'.format(score_line))
return self.meteor_p.stdout.readline().strip()
def _score(self, hypothesis_str, reference_list):
self.lock.acquire()
# SCORE ||| reference 1 words ||| reference n words ||| hypothesis words
hypothesis_str = hypothesis_str.replace('|||','').replace('  ',' ')
score_line = ' ||| '.join(('SCORE', ' ||| '.join(reference_list), hypothesis_str))
self.meteor_p.stdin.write('{}\n'.format(score_line))
stats = self.meteor_p.stdout.readline().strip()
eval_line = 'EVAL ||| {}'.format(stats)
# EVAL ||| stats
self.meteor_p.stdin.write('{}\n'.format(eval_line))
score = float(self.meteor_p.stdout.readline().strip())
self.lock.release()
return score
def __exit__(self):
self.lock.acquire()
self.meteor_p.stdin.close()
self.meteor_p.wait()
self.lock.release()
================================================
FILE: refer/evaluation/readme.txt
================================================
This folder contains the modified coco-caption evaluation code, which is downloaded from https://github.com/tylin/coco-caption.git,
and refEvaluation, which is to be called by the refer algorithm.
More specifically, this folder contains:
1. bleu/
2. cider/
3. meteor/
4. rouge/
5. tokenizer/
6. __init__.py
7. refEvaluation.py
================================================
FILE: refer/evaluation/refEvaluation.py
================================================
from tokenizer.ptbtokenizer import PTBTokenizer
from bleu.bleu import Bleu
from meteor.meteor import Meteor
from rouge.rouge import Rouge
from cider.cider import Cider
"""
Input: refer and Res = [{ref_id, sent}]
Things of interest
evalRefs - list of ['ref_id', 'CIDEr', 'Bleu_1', 'Bleu_2', 'Bleu_3', 'Bleu_4', 'ROUGE_L', 'METEOR']
eval - dict of {metric: score}
refToEval - dict of {ref_id: ['ref_id', 'CIDEr', 'Bleu_1', 'Bleu_2', 'Bleu_3', 'Bleu_4', 'ROUGE_L', 'METEOR']}
"""
class RefEvaluation:
def __init__ (self, refer, Res):
"""
:param refer: refer class of current dataset
:param Res: [{'ref_id', 'sent'}]
"""
self.evalRefs = []
self.eval = {}
self.refToEval = {}
self.refer = refer
self.Res = Res
def evaluate(self):
evalRefIds = [ann['ref_id'] for ann in self.Res]
refToGts = {}
for ref_id in evalRefIds:
ref = self.refer.Refs[ref_id]
gt_sents = [sent['sent'].encode('ascii', 'ignore').decode('ascii') for sent in ref['sentences']] # up to 3 expressions
refToGts[ref_id] = gt_sents
refToRes = {ann['ref_id']: [ann['sent']] for ann in self.Res}
print 'tokenization...'
tokenizer = PTBTokenizer()
self.refToRes = tokenizer.tokenize(refToRes)
self.refToGts = tokenizer.tokenize(refToGts)
# =================================================
# Set up scorers
# =================================================
print 'setting up scorers...'
scorers = [
(Bleu(4), ["Bleu_1", "Bleu_2", "Bleu_3", "Bleu_4"]),
(Meteor(),"METEOR"),
(Rouge(), "ROUGE_L"),
(Cider(), "CIDEr")
]
# =================================================
# Compute scores
# =================================================
for scorer, method in scorers:
print 'computing %s score...'%(scorer.method())
score, scores = scorer.compute_score(self.refToGts, self.refToRes)
if type(method) == list:
for sc, scs, m in zip(score, scores, method):
self.setEval(sc, m)
self.setRefToEvalRefs(scs, self.refToGts.keys(), m)
print "%s: %0.3f"%(m, sc)
else:
self.setEval(score, method)
self.setRefToEvalRefs(scores, self.refToGts.keys(), method)
print "%s: %0.3f"%(method, score)
self.setEvalRefs()
def setEval(self, score, method):
self.eval[method] = score
def setRefToEvalRefs(self, scores, refIds, method):
for refId, score in zip(refIds, scores):
if not refId in self.refToEval:
self.refToEval[refId] = {}
self.refToEval[refId]["ref_id"] = refId
self.refToEval[refId][method] = score
def setEvalRefs(self):
self.evalRefs = [eval for refId, eval in self.refToEval.items()]
if __name__ == '__main__':
import os.path as osp
import sys
ROOT_DIR = osp.abspath(osp.join(osp.dirname(__file__), '..', '..'))
sys.path.insert(0, osp.join(ROOT_DIR, 'lib', 'datasets'))
from refer import REFER
# load refer of dataset
dataset = 'refcoco'
refer = REFER(dataset, splitBy = 'google')
# mimic some Res
val_refIds = refer.getRefIds(split='test')
ref_id = 49767
print "GD: %s" % refer.Refs[ref_id]['sentences']
Res = [{'ref_id': ref_id, 'sent': 'left bottle'}]
# evaluate some refer expressions
refEval = RefEvaluation(refer, Res)
refEval.evaluate()
# print output evaluation scores
for metric, score in refEval.eval.items():
print '%s: %.3f'%(metric, score)
# demo how to use evalImgs to retrieve low score result
# evals = [eva for eva in refEval.evalRefs if eva['CIDEr']<30]
# print 'ground truth sents'
# refId = evals[0]['ref_id']
# print 'refId: %s' % refId
# print [sent['sent'] for sent in refer.Refs[refId]['sentences']]
#
# print 'generated sent (CIDEr score %0.1f)' % (evals[0]['CIDEr'])
# print refEval.refToEval[8]
================================================
FILE: refer/evaluation/rouge/__init__.py
================================================
__author__ = 'vrama91'
================================================
FILE: refer/evaluation/rouge/rouge.py
================================================
#!/usr/bin/env python
#
# File Name : rouge.py
#
# Description : Computes ROUGE-L metric as described by Lin and Hovey (2004)
#
# Creation Date : 2015-01-07 06:03
# Author : Ramakrishna Vedantam
import numpy as np
import pdb
def my_lcs(string, sub):
"""
Calculates longest common subsequence for a pair of tokenized strings
:param string : list of str : tokens from a string split using whitespace
:param sub : list of str : shorter string, also split using whitespace
:returns: length (list of int): length of the longest common subsequence between the two strings
Note: my_lcs only gives length of the longest common subsequence, not the actual LCS
"""
if(len(string)< len(sub)):
sub, string = string, sub
lengths = [[0 for i in range(0,len(sub)+1)] for j in range(0,len(string)+1)]
for j in range(1,len(sub)+1):
for i in range(1,len(string)+1):
if(string[i-1] == sub[j-1]):
lengths[i][j] = lengths[i-1][j-1] + 1
else:
lengths[i][j] = max(lengths[i-1][j] , lengths[i][j-1])
return lengths[len(string)][len(sub)]
class Rouge():
'''
Class for computing ROUGE-L score for a set of candidate sentences for the MS COCO test set
'''
def __init__(self):
# vrama91: updated the value below based on discussion with Hovey
self.beta = 1.2
def calc_score(self, candidate, refs):
"""
Compute ROUGE-L score given one candidate and references for an image
:param candidate: str : candidate sentence to be evaluated
:param refs: list of str : COCO reference sentences for the particular image to be evaluated
:returns score: int (ROUGE-L score for the candidate evaluated against references)
"""
assert(len(candidate)==1)
assert(len(refs)>0)
prec = []
rec = []
# split into tokens
token_c = candidate[0].split(" ")
for reference in refs:
# split into tokens
token_r = reference.split(" ")
# compute the longest common subsequence
lcs = my_lcs(token_r, token_c)
prec.append(lcs/float(len(token_c)))
rec.append(lcs/float(len(token_r)))
prec_max = max(prec)
rec_max = max(rec)
if(prec_max!=0 and rec_max !=0):
score = ((1 + self.beta**2)*prec_max*rec_max)/float(rec_max + self.beta**2*prec_max)
else:
score = 0.0
return score
def compute_score(self, gts, res):
"""
Computes Rouge-L score given a set of reference and candidate sentences for the dataset
Invoked by evaluate_captions.py
:param hypo_for_image: dict : candidate / test sentences with "image name" key and "tokenized sentences" as values
:param ref_for_image: dict : reference MS-COCO sentences with "image name" key and "tokenized sentences" as values
:returns: average_score: float (mean ROUGE-L score computed by averaging scores for all the images)
"""
assert(gts.keys() == res.keys())
imgIds = gts.keys()
score = []
for id in imgIds:
hypo = res[id]
ref = gts[id]
score.append(self.calc_score(hypo, ref))
# Sanity check.
assert(type(hypo) is list)
assert(len(hypo) == 1)
assert(type(ref) is list)
assert(len(ref) > 0)
average_score = np.mean(np.array(score))
return average_score, np.array(score)
def method(self):
return "Rouge"
================================================
FILE: refer/evaluation/tokenizer/__init__.py
================================================
__author__ = 'hfang'
================================================
FILE: refer/evaluation/tokenizer/ptbtokenizer.py
================================================
#!/usr/bin/env python
#
# File Name : ptbtokenizer.py
#
# Description : Do the PTB Tokenization and remove punctuations.
#
# Creation Date : 29-12-2014
# Last Modified : Thu Mar 19 09:53:35 2015
# Authors : Hao Fang and Tsung-Yi Lin
import os
import sys
import subprocess
import tempfile
import itertools
# path to the stanford corenlp jar
STANFORD_CORENLP_3_4_1_JAR = 'stanford-corenlp-3.4.1.jar'
# punctuations to be removed from the sentences
PUNCTUATIONS = ["''", "'", "``", "`", "-LRB-", "-RRB-", "-LCB-", "-RCB-", \
".", "?", "!", ",", ":", "-", "--", "...", ";"]
class PTBTokenizer:
"""Python wrapper of Stanford PTBTokenizer"""
def tokenize(self, captions_for_image):
cmd = ['java', '-cp', STANFORD_CORENLP_3_4_1_JAR, \
'edu.stanford.nlp.process.PTBTokenizer', \
'-preserveLines', '-lowerCase']
# ======================================================
# prepare data for PTB Tokenizer
# ======================================================
final_tokenized_captions_for_image = {}
image_id = [k for k, v in captions_for_image.items() for _ in range(len(v))]
sentences = '\n'.join([c.replace('\n', ' ') for k, v in captions_for_image.items() for c in v])
# ======================================================
# save sentences to temporary file
# ======================================================
path_to_jar_dirname=os.path.dirname(os.path.abspath(__file__))
tmp_file = tempfile.NamedTemporaryFile(delete=False, dir=path_to_jar_dirname)
tmp_file.write(sentences)
tmp_file.close()
# ======================================================
# tokenize sentence
# ======================================================
cmd.append(os.path.basename(tmp_file.name))
p_tokenizer = subprocess.Popen(cmd, cwd=path_to_jar_dirname, \
stdout=subprocess.PIPE)
token_lines = p_tokenizer.communicate(input=sentences.rstrip())[0]
lines = token_lines.split('\n')
# remove temp file
os.remove(tmp_file.name)
# ======================================================
# create dictionary for tokenized captions
# ======================================================
for k, line in zip(image_id, lines):
if not k in final_tokenized_captions_for_image:
final_tokenized_captions_for_image[k] = []
tokenized_caption = ' '.join([w for w in line.rstrip().split(' ') \
if w not in PUNCTUATIONS])
final_tokenized_captions_for_image[k].append(tokenized_caption)
return final_tokenized_captions_for_image
================================================
FILE: refer/external/README.md
================================================
The codes inside this folder are copied from pycocotools: https://github.com/pdollar/coco
================================================
FILE: refer/external/__init__.py
================================================
__author__ = 'tylin'
================================================
FILE: refer/external/_mask.pyx
================================================
# distutils: language = c
# distutils: sources = external/maskApi.c
#**************************************************************************
# Microsoft COCO Toolbox. version 2.0
# Data, paper, and tutorials available at: http://mscoco.org/
# Code written by Piotr Dollar and Tsung-Yi Lin, 2015.
# Licensed under the Simplified BSD License [see coco/license.txt]
#**************************************************************************
__author__ = 'tsungyi'
# import both Python-level and C-level symbols of Numpy
# the API uses Numpy to interface C and Python
import numpy as np
cimport numpy as np
from libc.stdlib cimport malloc, free
# intialized Numpy. must do.
np.import_array()
# import numpy C function
# we use PyArray_ENABLEFLAGS to make Numpy ndarray responsible to memoery management
cdef extern from "numpy/arrayobject.h":
void PyArray_ENABLEFLAGS(np.ndarray arr, int flags)
# Declare the prototype of the C functions in MaskApi.h
cdef extern from "maskApi.h":
ctypedef unsigned int uint
ctypedef unsigned long siz
ctypedef unsigned char byte
ctypedef double* BB
ctypedef struct RLE:
siz h,
siz w,
siz m,
uint* cnts,
void rlesInit( RLE **R, siz n )
void rleEncode( RLE *R, const byte *M, siz h, siz w, siz n )
void rleDecode( const RLE *R, byte *mask, siz n )
void rleMerge( const RLE *R, RLE *M, siz n, bint intersect )
void rleArea( const RLE *R, siz n, uint *a )
void rleIou( RLE *dt, RLE *gt, siz m, siz n, byte *iscrowd, double *o )
void bbIou( BB dt, BB gt, siz m, siz n, byte *iscrowd, double *o )
void rleToBbox( const RLE *R, BB bb, siz n )
void rleFrBbox( RLE *R, const BB bb, siz h, siz w, siz n )
void rleFrPoly( RLE *R, const double *xy, siz k, siz h, siz w )
char* rleToString( const RLE *R )
void rleFrString( RLE *R, char *s, siz h, siz w )
# python class to wrap RLE array in C
# the class handles the memory allocation and deallocation
cdef class RLEs:
cdef RLE *_R
cdef siz _n
def __cinit__(self, siz n =0):
rlesInit(&self._R, n)
self._n = n
# free the RLE array here
def __dealloc__(self):
if self._R is not NULL:
for i in range(self._n):
free(self._R[i].cnts)
free(self._R)
def __getattr__(self, key):
if key == 'n':
return self._n
raise AttributeError(key)
# python class to wrap Mask array in C
# the class handles the memory allocation and deallocation
cdef class Masks:
cdef byte *_mask
cdef siz _h
cdef siz _w
cdef siz _n
def __cinit__(self, h, w, n):
self._mask = <byte*> malloc(h*w*n* sizeof(byte))
self._h = h
self._w = w
self._n = n
# def __dealloc__(self):
# the memory management of _mask has been passed to np.ndarray
# it doesn't need to be freed here
# called when passing into np.array() and return an np.ndarray in column-major order
def __array__(self):
cdef np.npy_intp shape[1]
shape[0] = self._h*self._w*self._n
# Create a 1D array, and reshape it to fortran/Matlab column-major array
ndarray = np.PyArray_SimpleNewFromData(1, shape, np.NPY_UINT8, self._mask).reshape((self._h, self._w, self._n), order='F')
# The _mask allocated by Masks is now handled by ndarray
PyArray_ENABLEFLAGS(ndarray, np.NPY_OWNDATA)
return ndarray
# internal conversion from Python RLEs object to compressed RLE format
def _toString(RLEs Rs):
cdef siz n = Rs.n
cdef bytes py_string
cdef char* c_string
objs = []
for i in range(n):
c_string = rleToString( &Rs._R[i] )
py_string = c_string
objs.append({
'size': [Rs._R[i].h, Rs._R[i].w],
'counts': py_string
})
free(c_string)
return objs
# internal conversion from compressed RLE format to Python RLEs object
def _frString(rleObjs):
cdef siz n = len(rleObjs)
Rs = RLEs(n)
cdef bytes py_string
cdef char* c_string
for i, obj in enumerate(rleObjs):
py_string = str(obj['counts'])
c_string = py_string
rleFrString( &Rs._R[i], c_string, obj['size'][0], obj['size'][1] )
return Rs
# encode mask to RLEs objects
# list of RLE string can be generated by RLEs member function
def encode(np.ndarray[np.uint8_t, ndim=3, mode='fortran'] mask):
h, w, n = mask.shape[0], mask.shape[1], mask.shape[2]
cdef RLEs Rs = RLEs(n)
rleEncode(Rs._R,<byte*>mask.data,h,w,n)
objs = _toString(Rs)
return objs
# decode mask from compressed list of RLE string or RLEs object
def decode(rleObjs):
cdef RLEs Rs = _frString(rleObjs)
h, w, n = Rs._R[0].h, Rs._R[0].w, Rs._n
masks = Masks(h, w, n)
rleDecode( Rs._R, masks._mask, n );
return np.array(masks)
def merge(rleObjs, bint intersect=0):
cdef RLEs Rs = _frString(rleObjs)
cdef RLEs R = RLEs(1)
rleMerge(Rs._R, R._R, Rs._n, intersect)
obj = _toString(R)[0]
return obj
def area(rleObjs):
cdef RLEs Rs = _frString(rleObjs)
cdef uint* _a = <uint*> malloc(Rs._n* sizeof(uint))
rleArea(Rs._R, Rs._n, _a)
cdef np.npy_intp shape[1]
shape[0] = Rs._n
a = np.array((Rs._n, ), dtype=np.uint8)
a = np.PyArray_SimpleNewFromData(1, shape, np.NPY_UINT32, _a)
PyArray_ENABLEFLAGS(a, np.NPY_OWNDATA)
return a
# iou computation. support function overload (RLEs-RLEs and bbox-bbox).
def iou( dt, gt, pyiscrowd ):
def _preproc(objs):
if len(objs) == 0:
return objs
if type(objs) == np.ndarray:
if len(objs.shape) == 1:
objs = objs.reshape((objs[0], 1))
# check if it's Nx4 bbox
if not len(objs.shape) == 2 or not objs.shape[1] == 4:
raise Exception('numpy ndarray input is only for *bounding boxes* and should have Nx4 dimension')
objs = objs.astype(np.double)
elif type(objs) == list:
# check if list is in box format and convert it to np.ndarray
isbox = np.all(np.array([(len(obj)==4) and ((type(obj)==list) or (type(obj)==np.ndarray)) for obj in objs]))
isrle = np.all(np.array([type(obj) == dict for obj in objs]))
if isbox:
objs = np.array(objs, dtype=np.double)
if len(objs.shape) == 1:
objs = objs.reshape((1,objs.shape[0]))
elif isrle:
objs = _frString(objs)
else:
raise Exception('list input can be bounding box (Nx4) or RLEs ([RLE])')
else:
raise Exception('unrecognized type. The following type: RLEs (rle), np.ndarray (box), and list (box) are supported.')
return objs
def _rleIou(RLEs dt, RLEs gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):
rleIou( dt._R, gt._R, m, n, <byte*> iscrowd.data, <double*> _iou.data )
def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):
bbIou( <BB> dt.data, <BB> gt.data, m, n, <byte*> iscrowd.data, <double*> _iou.data )
def _len(obj):
cdef siz N = 0
if type(obj) == RLEs:
N = obj.n
elif len(obj)==0:
pass
elif type(obj) == np.ndarray:
N = obj.shape[0]
return N
# convert iscrowd to numpy array
cdef np.ndarray[np.uint8_t, ndim=1] iscrowd = np.array(pyiscrowd, dtype=np.uint8)
# simple type checking
cdef siz m, n
dt = _preproc(dt)
gt = _preproc(gt)
m = _len(dt)
n = _len(gt)
if m == 0 or n == 0:
return []
if not type(dt) == type(gt):
raise Exception('The dt and gt should have the same data type, either RLEs, list or np.ndarray')
# define local variables
cdef double* _iou = 0
cdef np.npy_intp shape[1]
# check type and assign iou function
if type(dt) == RLEs:
_iouFun = _rleIou
elif type(dt) == np.ndarray:
_iouFun = _bbIou
else:
raise Exception('input data type not allowed.')
_iou = <double*> malloc(m*n* sizeof(double))
iou = np.zeros((m*n, ), dtype=np.double)
shape[0] = m*n
iou = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _iou)
PyArray_ENABLEFLAGS(iou, np.NPY_OWNDATA)
_iouFun(dt, gt, iscrowd, m, n, iou)
return iou.reshape((m,n), order='F')
def toBbox( rleObjs ):
cdef RLEs Rs = _frString(rleObjs)
cdef siz n = Rs.n
cdef BB _bb = <BB> malloc(4*n* sizeof(double))
rleToBbox( Rs._R, _bb, n )
cdef np.npy_intp shape[1]
shape[0] = 4*n
bb = np.array((1,4*n), dtype=np.double)
bb = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _bb).reshape((n, 4))
PyArray_ENABLEFLAGS(bb, np.NPY_OWNDATA)
return bb
def frBbox(np.ndarray[np.double_t, ndim=2] bb, siz h, siz w ):
cdef siz n = bb.shape[0]
Rs = RLEs(n)
rleFrBbox( Rs._R, <const BB> bb.data, h, w, n )
objs = _toString(Rs)
return objs
def frPoly( poly, siz h, siz w ):
cdef np.ndarray[np.double_t, ndim=1] np_poly
n = len(poly)
Rs = RLEs(n)
for i, p in enumerate(poly):
np_poly = np.array(p, dtype=np.double, order='F')
rleFrPoly( &Rs._R[i], np_poly.data, len(np_poly)/2, h, w )
objs = _toString(Rs)
return objs
def frUncompressedRLE(ucRles, siz h, siz w):
cdef np.ndarray[np.uint32_t, ndim=1] cnts
cdef RLE R
cdef uint *data
n = len(ucRles)
objs = []
for i in range(n):
Rs = RLEs(1)
cnts = np.array(ucRles[i]['counts'], dtype=np.uint32)
# time for malloc can be saved here but it's fine
data = <uint*> malloc(len(cnts)* sizeof(uint))
for j in range(len(cnts)):
data[j] = cnts[j]
R = RLE(ucRles[i]['size'][0], ucRles[i]['size'][1], len(cnts), data)
Rs._R[0] = R
objs.append(_toString(Rs)[0])
return objs
def frPyObjects(pyobj, siz h, w):
if type(pyobj) == np.ndarray:
objs = frBbox(pyobj, h, w )
elif type(pyobj) == list and len(pyobj[0]) == 4:
objs = frBbox(pyobj, h, w )
elif type(pyobj) == list and len(pyobj[0]) > 4:
objs = frPoly(pyobj, h, w )
elif type(pyobj) == list and type(pyobj[0]) == dict:
objs = frUncompressedRLE(pyobj, h, w)
else:
raise Exception('input type is not supported.')
return objs
================================================
FILE: refer/external/mask.py
================================================
__author__ = 'tsungyi'
import external._mask as _mask
# Interface for manipulating masks stored in RLE format.
#
# RLE is a simple yet efficient format for storing binary masks. RLE
# first divides a vector (or vectorized image) into a series of piecewise
# constant regions and then for each piece simply stores the length of
# that piece. For example, given M=[0 0 1 1 1 0 1] the RLE counts would
# be [2 3 1 1], or for M=[1 1 1 1 1 1 0] the counts would be [0 6 1]
# (note that the odd counts are always the numbers of zeros). Instead of
# storing the counts directly, additional compression is achieved with a
# variable bitrate representation based on a common scheme called LEB128.
#
# Compression is greatest given large piecewise constant regions.
# Specifically, the size of the RLE is proportional to the number of
# *boundaries* in M (or for an image the number of boundaries in the y
# direction). Assuming fairly simple shapes, the RLE representation is
# O(sqrt(n)) where n is number of pixels in the object. Hence space usage
# is substantially lower, especially for large simple objects (large n).
#
# Many common operations on masks can be computed directly using the RLE
# (without need for decoding). This includes computations such as area,
# union, intersection, etc. All of these operations are linear in the
# size of the RLE, in other words they are O(sqrt(n)) where n is the area
# of the object. Computing these operations on the original mask is O(n).
# Thus, using the RLE can result in substantial computational savings.
#
# The following API functions are defined:
# encode - Encode binary masks using RLE.
# decode - Decode binary masks encoded via RLE.
# merge - Compute union or intersection of encoded masks.
# iou - Compute intersection over union between masks.
# area - Compute area of encoded masks.
# toBbox - Get bounding boxes surrounding encoded masks.
# frPyObjects - Convert polygon, bbox, and uncompressed RLE to encoded RLE mask.
#
# Usage:
# Rs = encode( masks )
# masks = decode( Rs )
# R = merge( Rs, intersect=false )
# o = iou( dt, gt, iscrowd )
# a = area( Rs )
# bbs = toBbox( Rs )
# Rs = frPyObjects( [pyObjects], h, w )
#
# In the API the following formats are used:
# Rs - [dict] Run-length encoding of binary masks
# R - dict Run-length encoding of binary mask
# masks - [hxwxn] Binary mask(s) (must have type np.ndarray(dtype=uint8) in column-major order)
# iscrowd - [nx1] list of np.ndarray. 1 indicates corresponding gt image has crowd region to ignore
# bbs - [nx4] Bounding box(es) stored as [x y w h]
# poly - Polygon stored as [[x1 y1 x2 y2...],[x1 y1 ...],...] (2D list)
# dt,gt - May be either bounding boxes or encoded masks
# Both poly and bbs are 0-indexed (bbox=[0 0 1 1] encloses first pixel).
#
# Finally, a note about the intersection over union (iou) computation.
# The standard iou of a ground truth (gt) and detected (dt) object is
# iou(gt,dt) = area(intersect(gt,dt)) / area(union(gt,dt))
# For "crowd" regions, we use a modified criteria. If a gt object is
# marked as "iscrowd", we allow a dt to match any subregion of the gt.
# Choosing gt' in the crowd gt that best matches the dt can be done using
# gt'=intersect(dt,gt). Since by definition union(gt',dt)=dt, computing
# iou(gt,dt,iscrowd) = iou(gt',dt) = area(intersect(gt,dt)) / area(dt)
# For crowd gt regions we use this modified criteria above for the iou.
#
# To compile run "python setup.py build_ext --inplace"
# Please do not contact us for help with compiling.
#
# Microsoft COCO Toolbox. version 2.0
# Data, paper, and tutorials available at: http://mscoco.org/
# Code written by Piotr Dollar and Tsung-Yi Lin, 2015.
# Licensed under the Simplified BSD License [see coco/license.txt]
encode = _mask.encode
decode = _mask.decode
iou = _mask.iou
merge = _mask.merge
area = _mask.area
toBbox = _mask.toBbox
frPyObjects = _mask.frPyObjects
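# Illustrative usage (not part of the original API notes above): a minimal
# round-trip sketch, assuming the Cython extension has been built via
# "python setup.py build_ext --inplace" so that external._mask is importable.
#
#   import numpy as np
#   from external import mask as maskUtils
#   m = np.zeros((10, 10, 1), dtype=np.uint8, order='F')   # h x w x n, column-major
#   m[2:8, 3:7, 0] = 1                                      # 6x4 foreground rectangle
#   rles = maskUtils.encode(m)            # [{'size': [10, 10], 'counts': ...}]
#   print(maskUtils.area(rles))           # [24]
#   print(maskUtils.toBbox(rles))         # [[3. 2. 4. 6.]] as [x y w h]
#   restored = maskUtils.decode(rles)     # back to a 10x10x1 uint8 array equal to m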
================================================
FILE: refer/external/maskApi.c
================================================
/**************************************************************************
* Microsoft COCO Toolbox. version 2.0
* Data, paper, and tutorials available at: http://mscoco.org/
* Code written by Piotr Dollar and Tsung-Yi Lin, 2015.
* Licensed under the Simplified BSD License [see coco/license.txt]
**************************************************************************/
#include "maskApi.h"
#include <math.h>
#include <stdlib.h>
uint umin( uint a, uint b ) { return (a<b) ? a : b; }
uint umax( uint a, uint b ) { return (a>b) ? a : b; }
void rleInit( RLE *R, siz h, siz w, siz m, uint *cnts ) {
R->h=h; R->w=w; R->m=m; R->cnts=(m==0)?0:malloc(sizeof(uint)*m);
siz j; if(cnts) for(j=0; jcnts[j]=cnts[j];
}
void rleFree( RLE *R ) {
free(R->cnts); R->cnts=0;
}
void rlesInit( RLE **R, siz n ) {
siz i; *R = (RLE*) malloc(sizeof(RLE)*n);
for(i=0; i0 ) {
c=umin(ca,cb); cc+=c; ct=0;
ca-=c; if(!ca && a0) {
crowd=iscrowd!=NULL && iscrowd[g];
if(dt[d].h!=gt[g].h || dt[d].w!=gt[g].w) { o[g*m+d]=-1; continue; }
siz ka, kb, a, b; uint c, ca, cb, ct, i, u; int va, vb;
ca=dt[d].cnts[0]; ka=dt[d].m; va=vb=0;
cb=gt[g].cnts[0]; kb=gt[g].m; a=b=1; i=u=0; ct=1;
while( ct>0 ) {
c=umin(ca,cb); if(va||vb) { u+=c; if(va&&vb) i+=c; } ct=0;
ca-=c; if(!ca && athr) keep[j]=0;
}
}
}
void bbIou( BB dt, BB gt, siz m, siz n, byte *iscrowd, double *o ) {
double h, w, i, u, ga, da; siz g, d; int crowd;
for( g=0; gthr) keep[j]=0;
}
}
}
void rleToBbox( const RLE *R, BB bb, siz n ) {
siz i; for( i=0; id?1:c=dy && xs>xe) || (dxye);
if(flip) { t=xs; xs=xe; xe=t; t=ys; ys=ye; ye=t; }
s = dx>=dy ? (double)(ye-ys)/dx : (double)(xe-xs)/dy;
if(dx>=dy) for( d=0; d<=dx; d++ ) {
t=flip?dx-d:d; u[m]=t+xs; v[m]=(int)(ys+s*t+.5); m++;
} else for( d=0; d<=dy; d++ ) {
t=flip?dy-d:d; v[m]=t+ys; u[m]=(int)(xs+s*t+.5); m++;
}
}
/* get points along y-boundary and downsample */
free(x); free(y); k=m; m=0; double xd, yd;
x=malloc(sizeof(int)*k); y=malloc(sizeof(int)*k);
for( j=1; jw-1 ) continue;
yd=(double)(v[j]h) yd=h; yd=ceil(yd);
x[m]=(int) xd; y[m]=(int) yd; m++;
}
/* compute rle encoding given y-boundary points */
k=m; a=malloc(sizeof(uint)*(k+1));
for( j=0; j0) b[m++]=a[j++]; else {
j++; if(jm, p=0; long x; int more;
char *s=malloc(sizeof(char)*m*6);
for( i=0; icnts[i]; if(i>2) x-=(long) R->cnts[i-2]; more=1;
while( more ) {
char c=x & 0x1f; x >>= 5; more=(c & 0x10) ? x!=-1 : x!=0;
if(more) c |= 0x20; c+=48; s[p++]=c;
}
}
s[p]=0; return s;
}
void rleFrString( RLE *R, char *s, siz h, siz w ) {
siz m=0, p=0, k; long x; int more; uint *cnts;
while( s[m] ) m++; cnts=malloc(sizeof(uint)*m); m=0;
while( s[p] ) {
x=0; k=0; more=1;
while( more ) {
char c=s[p]-48; x |= (c & 0x1f) << 5*k;
more = c & 0x20; p++; k++;
if(!more && (c & 0x10)) x |= -1 << 5*k;
}
if(m>2) x+=(long) cnts[m-2]; cnts[m++]=(uint) x;
}
rleInit(R,h,w,m,cnts); free(cnts);
}
================================================
FILE: refer/external/maskApi.h
================================================
/**************************************************************************
* Microsoft COCO Toolbox. version 2.0
* Data, paper, and tutorials available at: http://mscoco.org/
* Code written by Piotr Dollar and Tsung-Yi Lin, 2015.
* Licensed under the Simplified BSD License [see coco/license.txt]
**************************************************************************/
#pragma once
typedef unsigned int uint;
typedef unsigned long siz;
typedef unsigned char byte;
typedef double* BB;
typedef struct { siz h, w, m; uint *cnts; } RLE;
/* Initialize/destroy RLE. */
void rleInit( RLE *R, siz h, siz w, siz m, uint *cnts );
void rleFree( RLE *R );
/* Initialize/destroy RLE array. */
void rlesInit( RLE **R, siz n );
void rlesFree( RLE **R, siz n );
/* Encode binary masks using RLE. */
void rleEncode( RLE *R, const byte *mask, siz h, siz w, siz n );
/* Decode binary masks encoded via RLE. */
void rleDecode( const RLE *R, byte *mask, siz n );
/* Compute union or intersection of encoded masks. */
void rleMerge( const RLE *R, RLE *M, siz n, int intersect );
/* Compute area of encoded masks. */
void rleArea( const RLE *R, siz n, uint *a );
/* Compute intersection over union between masks. */
void rleIou( RLE *dt, RLE *gt, siz m, siz n, byte *iscrowd, double *o );
/* Compute non-maximum suppression between bounding masks */
void rleNms( RLE *dt, siz n, uint *keep, double thr );
/* Compute intersection over union between bounding boxes. */
void bbIou( BB dt, BB gt, siz m, siz n, byte *iscrowd, double *o );
/* Compute non-maximum suppression between bounding boxes */
void bbNms( BB dt, siz n, uint *keep, double thr );
/* Get bounding boxes surrounding encoded masks. */
void rleToBbox( const RLE *R, BB bb, siz n );
/* Convert bounding boxes to encoded masks. */
void rleFrBbox( RLE *R, const BB bb, siz h, siz w, siz n );
/* Convert polygon to encoded mask. */
void rleFrPoly( RLE *R, const double *xy, siz k, siz h, siz w );
/* Get compressed string representation of encoded mask. */
char* rleToString( const RLE *R );
/* Convert from compressed string representation of encoded mask. */
void rleFrString( RLE *R, char *s, siz h, siz w );
================================================
FILE: refer/pyEvalDemo.ipynb
================================================
{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%matplotlib inline\n",
"from refer import REFER\n",
"import numpy as np\n",
"import sys\n",
"import os.path as osp\n",
"import json\n",
"import matplotlib.pyplot as plt\n",
"from matplotlib.patches import Rectangle"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"loading dataset refcoco into memory...\n",
"creating index...\n",
"index created.\n",
"DONE (t=9.47s)\n"
]
}
],
"source": [
"data_root = './data' # contains refclef, refcoco, refcoco+, refcocog and images\n",
"dataset = 'refcoco'\n",
"splitBy = 'unc'\n",
"refer = REFER(data_root, dataset, splitBy)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 1. Evaluate Refering Expressions by Language Metrics"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"sys.path.insert(0, './evaluation')\n",
"from refEvaluation import RefEvaluation"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{u'sent': u'man in black', u'ref_id': 47}\n"
]
}
],
"source": [
"# Here's our example expression file\n",
"sample_expr_file = json.load(open('test/sample_expressions_testA.json', 'r'))\n",
"sample_exprs = sample_expr_file['predictions']\n",
"print sample_exprs[0]"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tokenization...\n",
"setting up scorers...\n",
"computing Bleu score...\n",
"{'reflen': 5356, 'guess': [5009, 3034, 1477, 275], 'testlen': 5009, 'correct': [2576, 580, 112, 2]}\n",
"ratio: 0.935212845407\n",
"Bleu_1: 0.480\n",
"Bleu_2: 0.293\n",
"Bleu_3: 0.182\n",
"Bleu_4: 0.080\n",
"computing METEOR score...\n",
"METEOR: 0.172\n",
"computing Rouge score...\n",
"ROUGE_L: 0.414\n",
"computing CIDEr score...\n",
"CIDEr: 0.669\n"
]
}
],
"source": [
"refEval = RefEvaluation(refer, sample_exprs)\n",
"refEval.evaluate()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 2. Evaluate Referring Expressions by Duplicate Rate"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"108/750 (14.40%) images have duplicate predicted sentences.\n"
]
}
],
"source": [
"# evalue how many images contain duplicate expressions\n",
"pred_refToSent = {int(it['ref_id']): it['sent'] for it in sample_exprs}\n",
"pred_imgToSents = {}\n",
"for ref_id, pred_sent in pred_refToSent.items():\n",
" image_id = refer.Refs[ref_id]['image_id']\n",
" pred_imgToSents[image_id] = pred_imgToSents.get(image_id, []) + [pred_sent]\n",
"# count duplicate\n",
"duplicate = 0\n",
"for image_id, sents in pred_imgToSents.items():\n",
" if len(set(sents)) < len(sents):\n",
" duplicate += 1\n",
"ratio = duplicate*100.0 / len(pred_imgToSents)\n",
"print '%s/%s (%.2f%%) images have duplicate predicted sentences.' % (duplicate, len(pred_imgToSents), ratio)"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": true
},
"source": [
"# 3.Evaluate Referring Comprehension"
]
},
{
"cell_type": "code",
"execution_count": 49,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# IoU function\n",
"def computeIoU(box1, box2):\n",
" # each box is of [x1, y1, w, h]\n",
" inter_x1 = max(box1[0], box2[0])\n",
" inter_y1 = max(box1[1], box2[1])\n",
" inter_x2 = min(box1[0]+box1[2]-1, box2[0]+box2[2]-1)\n",
" inter_y2 = min(box1[1]+box1[3]-1, box2[1]+box2[3]-1)\n",
"\n",
" if inter_x1 < inter_x2 and inter_y1 < inter_y2:\n",
" inter = (inter_x2-inter_x1+1)*(inter_y2-inter_y1+1)\n",
" else:\n",
" inter = 0\n",
" union = box1[2]*box1[3] + box2[2]*box2[3] - inter\n",
" return float(inter)/union"
]
},
{
"cell_type": "code",
"execution_count": 41,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# randomly sample one ref\n",
"ref_ids = refer.getRefIds()\n",
"ref_id = ref_ids[np.random.randint(0, len(ref_ids))]\n",
"ref = refer.Refs[ref_id]\n",
"\n",
"# let's fake one bounding box by randomly picking one instance inside this image\n",
"image_id = ref['image_id']\n",
"anns = refer.imgToAnns[image_id]\n",
"ann = anns[np.random.randint(0, len(anns))]"
]
},
{
"cell_type": "code",
"execution_count": 42,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"1. person bending\n",
"2. man\n",
"3. the person bending over\n"
]
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAMsAAAEACAYAAAAdo4LwAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAIABJREFUeJzsvXe0Jdd13vnb55yqG19+nbvR3WjEBkGCJADShASAECmR\nw2EwFShqFExzybJspbHWWLIVCMmUrJG1Zlkyh9J4lCzJosIwSCLFKGYQIEDERmoADXSOL793U9U5\ne88fdV/3a7AbxJAEW57V31pvvXvr1j2hzv7OjlVXzIyLuIiL+NpwF3oAF3ER/6PgIlku4iKeJy6S\n5SIu4nniIlku4iKeJy6S5SIu4nniIlku4iKeJ14QsojI60TkcRF5UkR+7oXo4yIu4lsN+WbnWUTE\nA3uB1wBHgHuAt5vZY9/Uji7iIr7FeCE0y43AU2a238xK4C+AN78A/VzERXxL8UKQZQtwaM37w8Nj\nF3ER/0PjhSDLxfqZi/j/JcIL0OYRYNua99uotMtpiMhFQl3EP2qYmTz72AtBlq8Al4vIDuAo8Dbg\n7c8+qRgs0ul00LIPtoIrC4qiR54FEjm+3iJkAcOQZNTaE1hMqBgiGWYJZwUpFZhvoEWXmEpCyElJ\nqWWKmUPNyBvTSBjBiavUnhnOVdfiV27/Vd51+y8jSPWZgOkZLotU55kBWqJmIAlUEAk4H0CMg5/5\n98z7dWzY8R1s2n4d5wqciAhmdvozwTBApFLwt99+O+9617uG/Q2/nwoUo1LYgrgMcYIlATFMIyYG\nJnifVS2mznA+HmEAlqHiEZeBJSADSYh4rFxCrMQsIc4Tiz6qBf/xt/6AX/6Ff4dYQRRP8AlNhrgc\nfIZJwJmjM/8IPoxTH92GOBAzEHfOeT/XsTPX+auP3X777dx+++1fdT2fL05f7zV9rL4+F8732Ted\nLGYWReQngI8DHviDc0XCThy8n1DzpAGk1MNJTq3WJPmAeEfeHEGKPtFB3mhSdFcIeQ0d9Ch78yhQ\nr49RDApqDZC8Sc07ykGXeqNFjAXiPJZ6OC2qC+QETBEnCAJyWlwrEiCIgaoOF1NwLmEmiICKIuJB\nMkwMw2GW0KHYqwxQLajiGgEzELGqPe+QoQiLc8MFc0MWnlkgEYeZDsks4Go4q87HJcQUcGgAh8ec\nAxQdEtyJgzAyXAwQaqglvAkaF/E+JyJ4UyjmSeIJaqhvQMgI2QSS+lWbaQkTw5Fj0RASkgy0QFA0\ntGhNXQsuQpJqLrI6N8AcSMLMnznGVwviWuF8LiH+evHsNr/ePl4IzYKZfRT46HOekyJWgHMBtZzW\n6CSaIv3eCidnjpFq93Hp1G40ePrFCVq1Nr1BQb01hW9OkGmiKAoaY6OgELJ2tbs2xxCBkEfUBOlE\n+p1T5KNNkkIyTx5qqDgwHY5XQRNJCwzDSYa4SmMkNbwYKfYRhKQd0IS4DB8CaobzGV48QRxlfwnT\nhGoCSjA/1BwOxWHigYTIqnOnYP7MddEEQwqDqxZWjJIBg9nDlGXJ1KZdOM0wEs6FIanP7Jaq1bxE\nIKUB3nkMwflxlD5OlzDXgHwM53JME14MNUXNY1kNE8G5gOTjIILooKK6r4FkpDTAiWAKTsJpDqzu\n4maxIrBFkAwhw1yJEBCR0xvS2u9UY35+gnw+TfVCkG0VLwhZng8ETz20iNojZBmmCSuVkVabkekX\nE2dO4AQkJcTl9Is+pjDwK5gELCn1VhMtE+J9tTCaMIvE1Me7Bi4D35jCM8788UcR7TM6eQnLiz1w\nRp7lvOqGXdjgOCkpztdIBiIFmgKSVsDVOH74Pg7vvZ/lfsHLb3g97emNlIMuQk5SRQpX7cACmjqU\nxQouFZX2MU/mM8Q5BoMePmsgHlLZBQwVT5aNYb7GLbfciohVekocZpVQGEqxfJhP/fGvMDbZZqrd\nYmL3brbt/mdDjVIJyarAOLdqBhnO1StNaq4ilzVRV0Mw1AWCCUkcCjjvKjPU4JZbX4urr0MVnFQa\nWX2G6grBRsHnOFXMS0UUAYdbI8QB5wUsVB9ipN4xyKZxvlbpc1vVppVEDE87PfZKMwu33HLLOcza\n4VZjVpnXpsNzzh2zqq6Nssrqr4dU3/Sk5PPqVMROHbwLMc/yyiKj49N47yv/QgJzeogRvxG0QYrL\npDig1WqhOFQFjyPU27i8AUkItQb4xtB88dV/KjPKKFFLMFjAhxaat6uFt4TXPiggA9TqqPbwrknR\nXyCr1QlZDZVWZT7pMqXVCLFLigNMPL7ZIuLJXM6Bz/4CK2GM+tR17Lz626HXw4kw6PcxHRC8Yq6N\nr7UhBJxVBKp0yKom8OfcMSs/qdI3zoykggzmoDGKG+53Is8d2Fzddc/V/vAMVNf6UWebRme+lzAL\niNhZO/nZ51SMUysRyTFLw+MKbqjRdUjeIDjLSBorDTjUVliPWCwQ8glMapX56Yemp62OV09vDKu+\noIiACckpEnuIZDhfQygxq6wFGxrEkFgl1znm+y1x8J8ftNrFxiY20Gg1sFSw75HPcenltzFYSrTG\nAo16AMag7NMvIxKEWl4ny2tojKRYoiJovyQ0BOc84HGSUM0wFLGAahfnwYqCMh4hz2s4a2K+gVmB\nd5MEiWAZyaAxshFD0OFupVpggxUyt0SpAZ+PonGWsldCFI7N7KcO9CXD95dYPPYYeZggtKbwo9NI\nLHFkuExI+EoYnIAoppXvIny1IA+ZDJZwEiozsjuDNaaYO3YHU9vfiDn7mkQBzhLqc2FV0IRVn82f\n09Y386fHtfbzlNKwnYQTIPUpyxWybBQJzeG5jhTLSoOVp0ipx2DQpzEyjWqOD0qMPXxoksyR1cfQ\nskBYwhEoBwXO19EwAuZPB2nMKq1RFY8oRoLeDNKYQsyTrESSIMwi1EgIIjlqESc5IoL3nq+FC6dZ\nDt1LrdXCopF5jw8tkvY41jvBhKuh/RLEk9cyup0VGqPTOBcIvonkTUpN1EOdbu8kkiJZyPF5C5MR\nHAOQPuJzxI1jRJxYpWHUwBIiAdMCT45lgWSCc4IM5TOlAd5nqCRSOYDBAJdlpHIJ7+ogGfgM53Mw\nOH7HrzDvR5ja8Z2sW78bWY1MWcK5jKQKUuJ1QCyWkfp6LPYQn+NdrdItIqCKUFD2FihjF3Ft6u0J\nhAxBqsiYJsQi+GwoIKev63mv+VrNcv7zViN15yaWmVUjELd6xlkEjzEOBdgqn8+GWsoBlrDYAUYx\nV+KkEnKcw+Eq/1IS2DAqqYbpIk4HSEqo1XBZjlFASiQJBCckLcEUT06yHhoLXMjxoY66Ng6txmFL\n4CcwV6sCH8/SiKtm7Jpj/3g0S1n00ZRINqDZnKTbOUg+kkNnCTe5k7F1uxgU8+TUCM1lxClOBe8c\n/d4cghK1QbMxSkqOLG+hqSDkgWQOxwgqBam/QOrPYH6MWnuU5c4itWYDLQoa9SYJwVIXqFEM5rHS\nEJ8RPGhpiMtwGnH1EcxnhNpoZeIB6ND
cEyPiEXIyqWFD29+JH0aCCpyBOo+5FpLXQBPeBpi0MRxu\n1V53HsjIWxvJzcBVglmtneEQJGRA9rwc2rXCbJYwey4NI1+zvbIckOX10xHCtU66935oDoGUC1VU\nUAK4OqIOsxJhAUkRcQ4plqC+ASv7uHxsSBipfCQPhFFMDdUSYh8bzGJxgFoJPqdQR57VMBXIwOV1\npL5pGL6OVWRO82rjlA3Da1n5RmvNxrVEeS5cMLLUvCelgthfhizQ1wOMFtcytu4SijKxfOIBYhin\nE/u4WJA3x3Akohmdzhy13BP7iZhN4BstUlniLNKZPUAyxWcNGrVRAgZZQKxH0V1Blpc59Phhlk7s\n57IXX0VobSDUxvB5m7w2grpU+TnqcT4naUkIo1WmQ8FJrIIBrgoqmBMcShCl21lCULwaiYj5DFSB\nWJHHBCyrzPayg4VRxBmmIFL5UU48MXVxvl3lLYBKiKv/q3i+kZ+156UU8b7G/9cii7WEC1mtcvrd\nuUy0ijCqCmGMVY9MFMQZzsaHmwFgjhQmEFEIeXUNeJbgmlRazAnUmlX0TgSnEVwYjm0AZBgBXOWN\nVNPLcAj4BDSHPtZq219tRj4fXDCytKYvRZwyrtVOOXd4kb7v02hfTd0y6qKVr+BreBmg/ZJB5xTO\ne6bW7aToLpPXc8ro8C7D+xoS2jTz9eArf8CZI6knEMEZvojkmy6hvemaoRBZZZ7FAS52WFlYoFZv\nYtQRAScJGyzSjccwLQlZjrkGWQhoMgZxgbLfx+X1ypyo5WjRoaSsciPJYak7TCQObWTnsNglRUNc\nF08LBFTD6cULfnSNeQA2DABw2jF9/tGcted5n7MaOft6UH3Pcz6yrbbrnDttzq1yfDXhe+Zkw1OF\n1GWonZ89XiQgZlW+zAzEV0Lvwhrzs3Hm/CEZ7Kw5rjVTV199fVVeF8xnUY2IOdRFsIBYRNFq93Zh\nmES0Kk/hZKjSMxyQ0jBGLw7TCE6GO4dWhopWGfpkVQ47pj6WIiFrVlnsZ4+HYTZ8jb1uGE5BhWEy\nM0IqqkibhEprmGE6QMg48eVf5WCvyWVXv5HRVgsak5C3wBwOxazEWSARcJQYWUVoBGU1N6k4V4Va\nzzivaTi31YTl1x/6/EbwQucwLnS/54jsfVWnF+xOyZQiRbGCxUQqC0wFL/lqnpBO7zhVaM8QIs6y\nSi2bVDuMSziJdOYPYEkRg4WZwwQE+ouUcVCpYRxiUgm35MDZYcaKdMPgrQyTgDbMhPsqd4AKQga+\nhYQRzNcR81U5ia+Bc6jWafpAY/IS3Mgu0mAZLFSJO/M41wJfmQq4MIyABcqkVPkQhmFvq0wy1dPX\nypQqd6nxWT7It26je74m39pynm9Vv98sqOpzjv2CkaU7v5/F7gN0e0/TW3ycxeJv0V6f+dknEVNC\n7TGwDDMlkg2TWAAJ7wwsYOYZmboU7wPgmZzeWWWqm1NkoTkkluB8i+Dy02rYObcmcTc8dtrAGWqS\n4VHcMJpTNTX8M3CCcw4hI+Eo44AUE7VsBHFGPrINL4LghmHJKmHmhqaH84K4Ku2g5SJiCaWPSEFv\n+SCa+qSkWOyhRXeYb6naUdWvubDfKM7V9pl8yXN9L73gY3shsLpxPhc5LxhZ6u0pWmwjyAhH9D7K\n8gqWuseZn3+GlcEd1P2rMesiGEVvmTINSNqjCleeaefZk1xrqpyxnZWZmUdOF0h+44t5JhFW9Vll\nowutckHnHpec85jzAZ9Pgsvw5IjUqI9sxYdGlc0PLaTWJlnCiqXTWe21Gfu1Y3n2vNbu9ufa+Z9L\nG5w5rpglBoMecCan8tWQodl47k9faHJ/o+1/LS12wRx8743oEpKP0HLreerJRymeOcDxmcOcmD7G\n/J9/lO9804+QT+2iVq+jAmlQUsYu3nu8y3HOna4xOlMXFSnLLrW8QRkV7zMQmJq+fJhYPmObfqP2\nsIhU+QBLqBpJ4/P7ztr3p1VWqnI3ppXJhwG+cvCtMt3I28O5rlYhn03ateblGSGvgh3Y+ffFMxrD\nn0XAM/VbgoijVqucaefWbhRnrmXlc50dRDiX2fhcidFzXa9nr9Pa92ey+HpOf+656s7W1qc9n9qy\nC0aW/qAg1NeDGXWpceX6bSymcV5+6xWoXUntbeuqymPALODN8NmZyEYsB/TKkmazSSU0q465I8/q\nlEWPkLfWVOPWh7ueO++ifC2ca7HNUZVXoFVM357/Yq09ZuYQVZIIblg+r5owqyqTUywIIUc1EVOf\nWt6q5jOsNMAc4hXwp0mT+gsMtKBeb7Iyc5D2+quY3/9ZWusvod64HKwqbLRyEbLmaTvD1JGkCmMj\nVUm/WRXiVnogOc4NEAtgVZFmNfE0vCDPIqaUmGbVOLWqXVslvIiRLMOIeAuYJECHuZJq7asrtFoK\nZKhVtx0IDrN51EbBHGV5lDzfQFI/1PaOWI0eHRZ0GoaJnZaWavxxeDuDks5RFb2KC0aWvD5emSCW\nsTB/kEvX3UR741VVoE8Ep+VwF82A1UWooKqErEaW158VahQ8gpGT1aodzp3O9sH5wqbPR7jPqj06\n+9vDf46l3goqrgqunmcXPSfhhm2rCKrdYYUAa/wqJcvqwzxGIIRWRS6pnH7B6A0WyfMM71uoVnkZ\n3xinruCdMLbhWix1mdpxMynFKsKW4Ejxp2xs/hDLS09g1mB8fCvqEnHxKUJ7G8G3WK1dK1OXpf0f\nZ3LHiynZhM+qKGYVHvZoOYeEDYgM67CG9+FoPAZe6Czupdm+maSJ4HOMhGkgFscJoUGyxyl1hLq/\nGnVzYCNV6RK+CiFLZQL2Bw+ThWmc24Azq6oarMSHSbS/F1xB8gWO3QRdJBIw3yWUQumFYDlYl+TX\nkbkRNB1GbYEgV4HOfpV8rOKC+SwhNHAuAzF2jX0/ztUIKaCacCmQ1KPD8gsZmgdr4/hr358WQFYL\nDiGZkoalLVVFakK1PCcZ1poxaoZqHIah19rzZ5sWp49bohh0KHsFx1bm8Lr6/YhqWflHw3ZSimf1\nddbYRVg4/iXKJz9wOudQ9THc4U8nJnXoM1Rmh+FAQmWeujpqcXhtHCIOX1koJBsgvlZpA1aqMh6X\nmGhcj8czNnoV46Pb0dSn358jNKYI1sAosdTBzMhCk6nL38pcugPnP4srO0QAHVQaRx8fZtwjakU1\nPi1w6WkGRcHI6G14Z1XU0yLEBSStkNUmIYzi5SXU2E6Ke5A0gpcAcY7C/va0qevEU6tdR5ARND2F\naBPiYbydQtLjON8BfwyXFEl7iXIS/AZEN5LcEl13N86NgTRBGtCfx1mfFf8IkWdYyD5wXpm9gA/Z\nE7Cq1smFatd03sh9HUhkq5W5cqYqtyqWWxNSXePUrc2ROAFvghMDMQb9OVDFdO6cDnCFodOvK0j/\nIYrevqpOy4Y+gsXh+UNB1RJBETGWlw+x1I9sXjeCsYAJCEovPT6sZTKMPt75Kq1oa0l6hiyjm15F
\nuPRNiIVqrjiiDGuxREEMwSESTmsHGc5xbHQLSFVnttpu0oKUQElDAZUqwOfGQIU5/+fMlB9DJVZl\n/M7wvkmrsZ5QW4f6EkuL1Z2UpCqGrSWT2W14eQ24UYItYsziUMqsh1DgXD4cW8L0Sfq1BTJRot5B\nXx9BJUIsKPxjkD6FK07iLRG1hzqPy7bR8/dRxHlIQs4caBcr91W5qbRMtQvMUPIg4vcgbhThGtRd\ni+pGymyZE/olojhK/gJHpPQDWnYJZs/Q5RhoQGtjpDDCqL0dkc24WD+vxF7gJ1Iqpxb2klJVYmI2\nvCnIGWXRA0mkVJWOmyXEpLI9T5On2tmrfMzZu/Wg7AIeVaHemEBtGQkTp53ZyqE7E9WpHOEIUofa\nRnx+WZUpXK28OH2uA4mIZHSKxzAV2iPbeDofUN8sJF1EyzlSOSCzEUqFyBwr9mWs9zmK3pPE1Dut\n7Va1jZkRLJLlI5iL1bWQDk7vROMRUgLMKNMMOizyrBzaMDRR/ekolPfD0hN1IMcAQTUiRMzA4lMo\nTzFe3krTBbzF02ZONWEFBpTlPMokhHG8BIp4FJEBC/FhVAOJz/CU/QyPyx9DnMWlm6ukqgloiYkg\n/mpq6dX0skdZ9idQ61amjkYWisdZzptotkgRF3FSktwTmAvU4y5EBlhWx+wyzJo4N4IUJehREvdB\nNoYEwXEbSBtChkgdH7YQBw021n+CPI2BvBgXcuqyGy83kmKHlr+SDMeSvoeon8frPD2+xIS95bzS\neuGSkhSYQSPPCS7gpYaT6gbdcrAXZzMkm4c0g5Ul0RbQdAItZ0hWZdZxAmmWpCvosCLXoiEKeWhW\nN5QxD+YYcAr0AM4MI1XVAjFVpNFELPaSymMoEXN1eukxTFdIWmLlSnUzFgkTsHIfqXgS7+t0iy/y\nyGPvY9el07ziql/Ay0a8n0ZCkyzbQm9+HxRd2jaN0idrXoJIUe3S5vFBhmaboLYMKlhZVHmX5Fgo\nBGEGYZbD/BLL9gzCcdROoRQkfRTVJTSdwtLfI5oYlAdIAkvyAYq0D3ElXqr7QAzwsqsq+vSXMOHf\ngUqOodXnaRlTxcqSetiA2X2oVq5vyDYSqRFsG2qOno3Q8XA83s+K30fmAyQPZZeBHaaIT9JLX8J0\nLz5eQxxMULetYBkH8/8MOs6K3g3lRsTVwNXwdgWkObrZH+HYSLIPImkMswcxaYJb4aj/GGgiK8Hk\nICX3MyPvRsolTBeA9dTrr6QsHkezaWrliyCuEMkQc7hwJdH2EX2HEfl+PNsoeZqaXk30Y+eV2QtG\nFlEYxL3M5O8lpQ5mA1KcZbH/eRIn6acjJH0YmONQ+BlIxzE5indjSNqHxXthMGDRfwblK1jq0C8e\nwvyAjtyHphV65ZdA5tF0kMxKevJZevIn9NL7IXUxO0qyOcSEzF2GygIDvY++3cOM3k2MJ3BR6MtD\niPZB53Bpnp6cpMsi+5c+wOHjDmmMMblpMy50UUoEwXmHiGds6mp8fQunun/Fid4Rlu3PkJQwEuIS\nagGzDqqJrv11pWG9Eu0QK+5TTOabMHc1psJk+kE6/hMs+s9huh6fDqBulFI/gsohChyl9emnp0kM\naMSryeUazDKWi48yO3gPjj4WcrxcWhmAKWG6gqkDUVQCpl2QASUDXNiNkCrT0OUE8+T1T9Dx76bv\nHmNr8WNcZTdSS+ND0u8DZsE9juM4uBlm48NEd4haaHPYfx7TPlvTD7O+8RZOpPvph09jPIaUh3HM\nMJD9rNhhjAfJ+F6wJVJI9OUzlHKCzf6n8e5mygA+Xk1IDabSd1Fko5idYBD/BtGSkF9elRrJKSIn\n8SmBlDhXx+kOpOhhrKNMozhpEvQYLp48r8xewHKX+1jWB1mXfowo+zlW/A5d/1ek/ACH+DOSOaLN\nE8MsI7adqEfBNtC1LyOyBZE63fBx8vQyNPWI7MFnxynjIfK4grg+wY+CPknp9lCGU9R5M3m8hUUd\nEMu7SS5Q9B5kxf4rJsuYNnni0F3UbSdqh/FhnCIsEGWKggNgUwzYR822kfsd7Bp9C37dHQzad7Ju\ndAqRETrFXsr+V0hxafgQDK1MxfwGNozXWVaPZW1W0p0sFScgnsClR/HSQ+hXflxaQSXQLq9DUwtB\nGcjTOPOMsItx2wFETLbgdYLcvRW0QbDL8C5n/uE9ZBYQv73SoOUKNX8LY9mtDNLDVUjVTuGsC6GN\nkzYmaRjtC2ARY4DDI6mLWcIk4WKJpQFZXE/fjtCkgXEckTGWOk8wK39KdCucck8g2sbZOMK11MON\nHC/vpZu+QpaOk3yPFbmPmPbwUnk3KpOYHAV/kIHbSz3tZFJ+HacFybrgrgc7zAoHydxVqAtIXCBL\nOfhputnjDOgS9AhOpsjdNM5yLM1S2PsR9xSFNFEW6ejfk+JXMFsg5u9nQR6mLhtwcjlkrzjzsI9z\n4IIVUi6UH6ejyqTfiOkSexb+gJdM/htKVli0u9ioP4Z3gS4nifJJPJeS6zhJEqV1GchRxrXLEgdp\nyi04drAsD1F3gRi34F3EpwLNpmj/h6u+5XO8iG8e9JciS/opRuQVqDwGuhuxDtUtzn1wJ+m6R6nb\nPyfoPKXchedmnO1DaJLwVRrIjZLkIfJyJ4P8CyQT6rYFsetAM4wvg7san1/CP6qbvzqyl1Ff46h+\nhkRkJR6toj/pBJv8T2JuD5YuI2Oeehhnb/EEzXAH68t/wSD/CkmERW1xyvYx6R0tewaTUTJ5Ocp+\n5vxHyMyxQf/ZhZriRXyTENMibb69yrml7ag9Q5AxkDGilAQSPWao+wLTp/G6GZGDJGZxtkhpPTQU\nNNLlaDhGERQ0I7cdlH6UYKdwbgdWXsoCnznvOC6YGdYvT3EwPoALYzT8JrwLZDHn4cN/RFRDrUZf\nHsLcNC7tYKreYJ28gRn/BOYSGdtohMhl/h08le5kiXnEtzlevpsF+QLOtnCsN2Be77pQU7yIbxKS\nzKMux4gEqYN/ktI/xcCdpLAP05NnaOi34eIRJG0FOYaTZhUhs2kavIJOuYzRJ+P7CPJavL2OAWNE\n9lXPPKOGMMHx8KXzjuOCaZYpt46VaMzaE0zrq9g8voOZtMTO7bs4Wd7OuuzVZHYlz/Q+zqVhB1kI\niNbZ7KdZKbdQo43319PxjzPa3Q2tAZku00jfTyaLdHWGl9R/CnlWTdTBf/v7bKq9hZn4PqbkZRRp\nCcl2Ukt7OB4eZSxdQ9fvpW0/yEq6i8nsVdzXfyfXZR+hrx+n7V6FWkD9J9ByHXPyAH/6x5OMj72U\nk8cOsH7bRj74//w9N7zsRnbu2E5n0GVQRAyhVsvwWc70SBsrjSf27OG+++5icWWRS664Gs27/OL/\nejUTm2DUPYLQJKZ/St0dQOImluUA3XAHrXgdp/yd1IrttOuj5Obpo7TiKxmEPbQG1zFwfeq2TC8/\nzIPx97gm/C+0mKQoJ6iFazkl/4l6mqbLMwRfox5/hLa
NghsDq+4ksCynjL/AjHZZiCdZl9/ElL2N\nrv1nkmsxYj+BOGNJPsaI3cyy/1sGIvSSpyVbaVjGbPk4Wb3FxvJNzPm/Y0pfj9k4sB/FM5/937T1\n1Ti9nIEYpvvxrKP969eeXrNca5g3kjN00CfP3kyKDxDdQQijMLgKsg+C/hQr+R/ixONtkWBtUhhQ\nWoGTQ0SuJMQHGdgRDtXu4jJuRwYrmDOsvI+jtb+ublk+Dy6Yz/IPx7+bmyb/A0v+bgo7zAb3L7HS\nsyKfHJZ6zRPFeHphkRe13shy2oevP0ojvp6T/j209VWYD7hiM1NsYDH/PJndwnJxBLMZFrqJLe1F\nHu8+wit/572n+5796TsZG3kxSXpkNk7X7gCpMZ+eZhNXs2In6PgDBO+ZKL+LGD6K6c3kuh7cfkQX\nWQgdmvYifv+/38H80kb63R5j4y0O7D/MxPgkm7Zu4sTMKcqiJM9yBMOiEF2i3WpjLiMTR6NeIzhB\nHbRrdXwA1UVe8qI5Nm8sWFd/HRoPoqJ0/b3UrORn/80essxwTmm6Ot3UY/t1h3jn236MRfYyLT9E\nsgNkrMPrCEjkhH8vo/pmMplCYpeBfxixKVJoMy9/SdPW0dBd5NzG/NLn8e0v01t8IyPjLbx7lLpd\nyYFyDxqbpEW3AAAgAElEQVQ+x674G3Tqn6Ed30ApRxFdj7enOZi/j43xB8i5kjI9ReZ6HAr3sTG9\nmRn3fjKZZcS+izyNoUzgrMBsFPULYEfIbQs99yjBrsV0ltpv7D69ZvHfH8C5UVIZca4OWZ0UP4Dp\nVnr5MUgjOH0Zd5c/yquzn6XwfWbkTkZkI7NWsjPtINouoii5CJb2Et1l9MJnGE3fR5IHMBlQEDlu\n93FFePc/Lp9FWUBlEaddRuxW9sb3cGX2M3jtkclNHLE/w4pxNtemmeHz1P1WxvS7mXUfIzHGwXQH\nG93N+LCHxbJPXW/E6wg1X6OgxsjYPDV7I+tHD53Vbxp5lGibWLG7MVHGeSWl1Niol7FQ+0Pq5VvI\nXMFYfCnGYZbZSCYr+HA/YjfiZMAHPv9pDu0pifMNpjcHBl04efwZ0qDL/qdPcOToITSWOJ8Rk6Im\nmCpBHM7X8c4NywwcQTKcM7JGTqPZYKxdZ35hmhAyavWnyfOSkbDE1ddsZ9OWeU7OL5DVlRp15kIP\nlzytxwty3UnmDgOL5EwTrRg+etazXr8flcTAjlELl1PXG1Ef8Npni74JrIb4nZgLTLR2s6xdJkcP\n0GOEQmd4Wh7kKv8OjGuwfIkirdDnTh7r/59cG/4Vha+zefB2Fn2XiTRA8gYL9mUaVmLpDkYZpyc1\nTvL3bLUfwfwSA56gnq5CykS3NoOL63EywJdP47Ktz5KWx9kvf8ETqcdt/n9jYJG6vJYifBGXDlC6\naxnRFW5tvJcBnyPYLibtxZgWNGWBwkeW5IuMaRuJV+LkWmbCH1DXG3hUfo3d+kOITuPdI2yUm88r\ns1+TLCLyh8AbgJNmdu3w2CTwl8B2YD/wfWa2MPzs3wH/nCqt/lNm9olztfuS2n/EcTd12cJdi+/j\n+rGX8MzyjxPaW9mQLmdSruMAH2dUX0d099D180xqjgLbeRPIMhZfTubarPhPk+QEwR+nl56msAUa\nNs6i3MFW+2HgN07327BRluOjtMN6Ch0n+i5BDxLSFgoWWOw9wPrm5Zya/xKHjo5wYOUeJlqX49Ic\nW8Y7/Le/fQBfXkeWeeZ1iZVnehQ2TypKgrSZWDeNw5NnGf1+QVmU1BotJiZHqNVr9Pp9AJq1Jr4R\nqrqtVFDzGYMiMT+/wMmjp1AnjE9MsmnzOvo2ytK9Nfp3jHDb63+Mffd+jq/c80myuiJBSbMtjrsP\nsUlfTyH3kKWXkvwpenIP6hdopBuo61Vk7gRd+QO8XIm3nZjuA7eOZ/gLtvAGQtkguQ4jtpuBW6Rp\n+yh9k53pdUS5B2MTPm3EUwM5zETYiIkjK+sMwtOU8jTCVYQI42k3C9k+vN+Gj026tS/SokHi71Hb\nxpK/j0bcCtKllrZwNPwmLbuCei0nS6fOkhWTfYzaS3hls48vWqi7hygd1JZoun/NUbmdlN2H18tp\nM0m0o1UOyYQJ5znOHjak78HLOrrZFwm6nzHbTinHuMzewCn/fpJ3jNtVzNP5+skC/BHwX4A/WXPs\n54FPmtlvSvWbkT8P/LyI7KZ6av5uqh8w+pSIXGFm+uxGl+33iekmRqREi7vox2uYbL0ORw66BGJc\n6f8FjxW/xeXyP5GXkwx8wYb0ag7bR5n0l9LkCUrpk9n1EB9EZSfTfhns2+n5u+nG/ZxMS2f3m/rM\nhg8x5W5k2l7OjHyQhm2mXn+CFJtsb72BX/ntP6G/Mkmvd4TDp4RBZy/ic3q9PyI32HV1nVpjjFP7\nn0Zyo13PSclD08iAUA/kec7kxg3kebN6rrNGlIJGrc7Cwgz9lS5kDiuqp7+EVpPRkTYbNm2gWdtC\nnmfU8gbLiyvMLJygNTGFFpFmrc7V/+T17LzhtcweeIRPfPh3sHabje47cW4HEsc4mn2YSdvAqN1E\ntAyxe+n4PdTKKwnZK8jZRE8eQqRLsjp12cYh9wm8a7DJXsYJ/ooN6dXUbT1Od7Lie4zrq4AOXfcI\nTbYR2cBWN0miz/2d3+MlU9/LqG7HfAdJAtklTPAyzBbQfARNf0fiBAuyk2m5lSnt0XNPomEvtfhW\nNsuPssTHGchxci49a832yKe5Qm6jll5DcjWSbmPJP0JDDOPPqOuL8G4P0T1M5A2I1sntICZLzPI4\nCxJYz/1YnKLMZihknpHyDRTZLPAUczLHmK0nyRJjXMv58DXJYmZfEJEdzzr8JuCW4ev/Bnx2SJg3\nA++z6jHy+0XkKaqfzfuqkFQzbKb0e5iXq7l5+mfJ7XuJ8hSDtIB5xaeI8wfI41YKPUbm11PYoxR+\nmc3l9ZxyHyOz19Kwmyj9vXSsS91O0UsnGbUdNLmSpjRQmTt7PjLDtNvFFD/AIu8l02miX+Gw28sm\n+UHufeIzHD10FJ8r+CaaGuShoLe4jK+1uGTXBubmjyKnZtBMaNdyOt1O9YT/3jLOjLKoE+s5i0sL\nSDaClAVlOaCRN5gYH2N6aiNiShTIncdpYGFpjrb2OHRwlno9J69Vj3clc2zdehknjx7kiWf2k8wx\nPdliy7ZL2HzFbt75M/8XM8eeIKUePfcZ1D/MSHw5/fxJSv0HlmUfm3gHIb2IkI+ykj5A0sco/XqQ\nk9T0JXj/OUatwSwHGViOd0bPH8bJ95OVC5h8CbMuC+EY8/IpWjRpljcjYR31cjNXTH0Px/RT7JBf\nRwb30skLcrkZ0RMU2qdn9zHt30bSY3gaGP8AuhuYo7p3pYPFU8xLn+3uJtTtP2vNdrk3Uo+3cpQ/\nYaNtpeavJdgyR9xjXGrX0XN9tukboJyhzDZziN9lm7wUnzYxLS3G3AK4KYRXMZLmcG7A0eweRiRQ\ni6+h5R5gvf
wSGu9lOax8/WQ5DzaY2Ynh6xPAhuHrzc8ixnl/Iu+Zo49w/eWvYraY4lD8NOtq1xFS\nh7rsYsXfRRF7iG1kzG0j6XFMp6jzbRD6LLkZZnpztBqXkLkv4PUmRniYUh5hmYIRdw2z+sdsTNdy\nOP/rs/ptOChdg0OD32CepwjBsT2+lVp+HFec4Pf+yx2cXCjpLs8SXODSa6/l8T1PsHHTVmLqEPvK\n4qkO3d4BFlcWuPalr0AUuouzaGjRXV7E501atToqQt5YoVlr02rXMYFaPTDRbhNqDu8DURMplTRH\nN9Gs50xtyfA+UE+Omfk55ufnueeuLzO5aZzrX/YSJifGOHZ0lofuv48kNZa782zdvIk/fn/g8p3K\nTTeMkUnOMzzEVnkD/VhHZJKBPoKUbRbDPsbdehq2DWyJQJvjxZPkNsZUNsmiO8Q8Xab0ZvLwCZ7O\nPswl9uMclQ9Rlxrb9HtASgrp0KPDMkdx9hDreDs9+Rvy7HU84v8lN5QTlFyNyDFGw60EeR8n3D42\n6y+CXQXhQWpSIOm1FNkXKbifXnqMwwzYnm47e810HQO5m63uJ0nuc9TsLsTWMSYL9E3YnH4A7LOc\ncB9lveZc4m6mzynKbBHRyKKdZKv7PtASJ+O4mLFR4WDj1xjN38rWwTuAQ/SzA2S6fF6h/4YdfDMz\nee5f8jrnZ5dPvINiMM2hg+/i0i0/R+ycRJvGcfs5wuJWRkdejbpTGAeYcD/EqeJDtBs7CeUOnCjT\njeshfpYl6TMer2A5W8LbVmpunp58BE+bBTnGhF1zVr89v8T68gYa+d1skDei8VoszLGOH8fnG+jp\nPWQuZ8PWcY6cOMgDX/oCrdEplleOYRZYmJ+nXs/w0bN921ZqXoi1Bt5B7A4YiAONqA5oNsehrO4j\ncVmNZqtOkTo8+sTT1c9PpOohFHkGzVaTVnOa6Q1tmrUxTi2v0I096mMNrmhfyrot61icmeeuu+5n\n09Zt3PBtr6Idahw+dJInntrLxz/8MT7kO/zNi2/k7W+rc/WWaQpdYFP2fdgAnD+J95uYLx6iHW6k\nIT16PuGZZAc3kaxOM12G0y9gtoh6w4optsq/InAZm+Rfc4JfYMBLaaYaI9xCX75IGR6hHV8J4TqO\nFv+AZP/AtF1BxBNcH9HqFulYfgfBP86c/DajegvL6UEm+V6K8BFWOERTX86L0ivpuZPMhq+cLaQ6\ngdhukg3wxbXg2hzP7oG0TJ1L6bsPYXKQll5KV3oY93GUeTanV7Lg76GQHsfsL9gs78TiE6gEJB9l\nR/xlZuULTLnrKThMK72c5fD0eQX56yXLCRHZaGbHRWQTsFp99uyfyNs6PPZVeNcvvoeRqY2UkviO\nmx7mNd/5NmbirzPl/wnN5q2UuodkIwR/Bfvcf6VR94zoO+m5Zxhxt1FjD7nbSZ/H0BCBSFNeRbQT\nLDLLRnkNhT/CQbnzrH6n7Hs44f6GifhtzLj3s56rKeQpBrrEJ+/4fSab63j4+BH01BF8bRKpO0Je\nMuhF8rxOt+wwWF5m564rWTp5kjL2iTFRFgmNQq2Z0+/0SVJpm04QFlbm8KFGv7NEZ7FHPctwecbY\n+DitWo253oBgMDJxksOHPO3RSdZPjDE+uZ7166fwBvv3H+PU4gKNeotBuUyzn9FxkV1XbGbD5lGW\nl7oMepFHH3uMd//cV8jqwo/+6A5ed8MWonuGOi/GtMVL/btYdHdwtPzvbMx+BG8lKe7E+QWCbOdU\n8btM1V9Hx+7Du1vwmrHMAZCHOL7yNFPNgEnE/Ck0XU+bqziU/RpTaYJN7vUsp0Tml5j1H2aD/hBL\n4UtMxRP0PEzad9CXyIz/IBviT1Jkib4ZHdmHuQznlVlOsDn9NJVVXyHZOMfCn9GSy9mnf8l09j/T\nSK9gxT1G0MPgCnIyCjnJdHwlpt/NSP5xTtjnqaVNtGWUCb+OOftPNPLLqacr0dTFhyto2gE++dnf\n5hNf/CRNexnz8oXzCv3zyrMMfZa/WxMN+01g1sz+dxH5eWDczFYd/D+n8lO2AJ8CLrNndSIi9tAD\nv8uVV72VothPo7EbFSVYj+Od/4NW42o6NmA0vJjFwd/hG1OICq14Iw1/OX07QuIYSXM0eupZC+em\nEecJOo5Ik+RWEGsyt3gvG37n2073feqXP0JKy+S6hdLtYVKvR9zLefDR3+L33jvD44ePMHtoL6ks\naU9vYHRkO4NiQInDnKOV1SEztlyyi6AFS8sDYoxAjveRohhU988P77/JM0+/SBSD6sk0o60RJsan\nmZ6YpNkcJUYY6IDYL0lAs9kkq9XJykRpRiIxtX6C3buvoOgY80sd8kbGocMzDHo9ypRoj7WZnqoe\nl7q8uMjhw7PML8+xtHgCpOQtb97KP/2uHZyQ9zFpN1CXHRzRO7nEfpxZ+TwTvIYV/UMKGVB3l3CK\nO9nEKA19B3CUriRyOcwx+Rwr5UauyP4tA90DdpS5Tpct7ZtAp5nlflw4QGHPoLLAhP0wPjWp2Szm\nrgM5wIB7WHHz9KVgo07g422UMkLy97Mkn6ZDQZRlrvrV3zu9ZrO/9AHm7BOM29uY9Y+wkVuZKT/I\nurDEsl7GpIcOAyaKN5L8KUTncH4DqgUz/sOM6A/Skgk68kHytIuu309NpvHlVThZoJQaffcVmnoF\np9zn2eZ/6evLs4jI+6ic+WkROQT8MlUs9q9E5J0MQ8cAZvaoiPwV8CgQgf+XuTeNsuy66jx/55w7\nvXmIOTIiM1I5p1IppSRrNJY8zzbYgAe6TNGYpgFTUNSqhi6GohvMUFBVXVS5qO5qypgCg91gW3iS\nbMmSJSNbUyqlTKVyiox5jje/d+d7Tn8IWSaE1V+qe2Wfj2+9tfZ6d+/99jl7n/v//ewrE+W769jJ\nn8LEKTEl8tIljedQ1h5q+Z+hnzzEkPVuMvM1tO2i6JI3N6CUwhdXMfRxGEbKKTK7gSUP40cvUlA1\nfLUIJkOmgq56glZxYZfd1eAvOWp/nGb2VUbFGxDqMCvrT3L50SXWOgGYHIXqFH5nHcsU6HU2kNLG\nLVUQwibym6hcDpF2aLTb6FRisBAEhLEhjRJyrkemwbYgjSFJEiSKaqnAgf1HKFXqOPkScRAihaAg\nK0jbIu+6FHJ5nLyDcFyUTsky8AOf+cVtCpUcY3tKiABaRTh4aD/tbZ9Wv0ur1aVaK2G7OQ4cnGJ1\nM0+xWACR8vijAfd95Vk+9pNvQJ88A8kzDFlvoyEfZNi8i3n5u8yIf4rO1hHao8QbQFkE4hGkqeBk\nNzGrP0HVPcr18gNkaR8pyljyOOXifaCHQQTU5c2Y7BAKjRGaRL+IttaI9Bux+AoqvQllV/FMG5+Y\nVEyQWZoV+V8Z0kfBTDKBwzyP7PKZwiUSKQ6wn/di62GkeC0JTzPB7QTZKhWxyKrzt4yKUTB5yOpo\neZGyeT+O3iBVSyhdx4gmjr4Ox6RE6hmMOYaQawRmnZ6ZZyL5SeDXv38
uXKsJfhon9INFhBdA6iJ0\nB8vWCNmgH4/gOHU8crTVN7FMQqY9ynoKpcZZ1Z8iJ4YZFv8Yoy+SCRvBJFq8gDF5YnEeh0MEpoHS\nU5R/79TLtk//wo9xovjLdLN1yrJISJs/+1d/jFMe4U//5iIitwcha0TBOlnax6KI65Xp97bxigWE\nEWTGwsslL4lGVLCUwRgbbStEuqOIIpQCk5EkMcoqUK9W2TM+QaFQIUtTJDYjE+NoI/E8G21S+oMB\n/sBHSgvXcXCtHL1uh+evXibpt4mCgJHJUW6/7S5uPHGYtcUmnX5MfaTE0uIKKYZCPo/rOSRxhh8E\ntNsdsijCLZXI5wVdf5Ff+O9nkPl1avpuDD6huB9bfgjSVZRpINTtaGMh8REij6AH6QKZDJDqGCbp\nIUWZgfUXuHoGYW5DiQroi2h5hJjPIsRtKDNLVzwF5iAFUySwtmkyj8ckljEMeA5pXNLUYOyIvfrn\nWZOfp2bupPrbb3rZZ91f+wo5Y6PlnWDAStdAJwglMVoS2gtY2DTN5ymoEwz05Z0zFR2K5u001BcZ\nxifSe8jpaWJ1GSkUSaYocJBMDOjIr+Gk+/HkCK71lu9bWa7ZRcqnLv4OtqpimwmwVgmcBil5Ar6N\nY3kouiwkf46XzYDJY4eHwKqzxmcpiL0obRPLB+hxEUydjB7SHEBZxxkkCZlIcc0YNrvfqT5Z+edo\n0aeoaqRC863HH8FVeTqdJkEWkwUtjOhjZxLXrpGv1hn4m9j5PFkcUChVMWKH+eEIG0hQEjJhsNIY\naWssS6K0IfEjXJVjpD5EtVJmEPZYW1kkMRGx8Zmfu8TS4mXOnTvNc8+e4eKFc1y8cpZz507zwoun\nuXj1WUanSrz/He9iamgvOSvH+vIa933ur/nzz3yaK0sXmd5XotPsUq1VOLR/hrxt02/3iYKAeqVO\nsVBCC4elhXkuvbhEGg/xnz/ZJe1kDPRVUiIK5sMk0VUsU0eY/Qhh0GqVfrQFWQOTLaLVMYQ4QWoe\nR1gOWiyRSz7CtngCjUNL/i7b1pMo/S3c7BQd8QVkWqecvR4lLZqqRZ8G851H6Zqn6ZhlHHOUBlep\nq1PsMe/G5wlC1ihwbJfP8lmdS+pBuuLPkdpBiFXa3kNoEtASN/Po8wwlc5xCNsVY+ku45nZckWIb\nm2F9mAYrSHEDWh7CcAN9vUbRWKyr/4uUM2idMW9/iSXxzKvG7LVDTkys0uR+Rq13cHHzLGPDEwz0\nLLOX9rFn4gJ2MUekYiLxFKHQ2LpAPruOnr/OsPsG0rhHYNYoWz/IYvuTSEczlbuDNJ6j5pwg1Rto\nqQjY3Vkx6TiOHCLkAXrZJea+vYhlbLbXtzAD6LJO2VhIYWF5VTARrm0jRUqn51Md9bBEQJKGO7hE\nE+P7IY7nkqk8IsuQShFHEcVChfJwDVtZ9Pt9xurDFIeqRElEs9lEYsgXypQKw6Q5jY5ToMFmd4Mo\nCul0Oqyvr1Me2sOpUzdzW/F2vnTfF9jurPPi6bPMX1lgfn6OPWN7ec3dt7J2dYteFDMxNUm71WRr\nu4EUhpMnjnDmhZA4lnQ3NtGlMl956Hre9Np1psduJjJ/ytPPPsXR4z/HUPkglpHYusTZ9js5UP1l\nUqvJkLwetMKIEkn2NEr16chvM64/zGr8SSbtHyG2ErbVZxnSH6GkT9B3ztDJFsF0mEp+g9A9x2ty\nH2STv0DqhFHrPYzxTmRUR6gEi21cs42hvMtn2rrMjDmJlC6ZlizYn6OoJ3la/BLHvfei5d0Uk/dg\nyRcQ4h4Ss4o0m1j6NjbtX8fgU07eQk5NoE2DWF6lZbooe4GGWachFriOo0zqm1nlH4C1X17XLFnq\n4RuJZJuueJ6R0ZsZSYdJZcqBGQ/XyuinL1B3humEZyDNM1H5WUT2IpPW2wn669jeEeLlFbb0XzIw\n6+wdfQdB6CPtKezMx1AjM4KaNbrLbsCLFNJbMHqUJ7/+OEmS0Y8GdDsB5XqNwO/RaqzgFipUnDw4\nDk6+jtY+laEyUdTDyB1asdCaOEuxHYew38XYAYX8CKk2yFyeykgZ32/TzySVXIEsibiyOEfBdXa0\n0RyLQezjhwOKxTHytTIjk1WOHL4eS8PpM8/T7zforK/zjfW/Zc+e63jf+36Ev/7c39JWiwwGfS68\n8ASzV69y9vwLHD52mJOHT+D7ffx+j1M3HWNlYZtme5sDew9Qr1qcec6h3d3kqaefo9Xbzztff5Xr\n9t7OknU/Jf+z1L13Y6wTLDQf5FP/6SI//KGHKQ1HiHydqnsPjrnzJfGM88T2N+izQDmXh3gUk6yy\nZJ9jSE5hJz6h1aPKKYzoYtQV3GyGyATs1x8glDbKjCC0g7G7yKxBpjQT6Q+j1XO7fLYiL+HomGF9\nO5H4FNPm4xjzAoetGk09j9F/iS+nGdMuJllh0/0Kw/ExHI7hqWkq5mNkeo5A3IeQxykxhBEHqSU/\nhCMfwwiPRf17VK27mTT/DRP8/69WeXSeAu9G6AkCvcEK32EzfoQx63YGMmZUvYWFlT/By45T2n+a\nKLyMJQ2OWyLnHSFmhOK+CqkqM5ws7+ggO8MY08XXAzbDRygUpomzK7vspnKJvllF6JNsLm0BMWG4\nyWgtz5WVTTqJBSam115BkVIsHyIVEUkUoJTEFgmO5ZIIdkQghCbzQ2IJOS0I/BZgUayU2VxbIjMx\nrlcjzXk8++Lz1GoFAj9H3q0gfIW0Ja7tkcZtlClAbIEDTiHPG990L7Vyha89+BDrG8ssLM7xtajH\nPa/7Afygz+OPfZNBYpMMOmwMevhBj+XFJVZXlzh+/SmqtSFs16Ker7G8tEI+N8YttxxnabHG3MIs\nFy9cYnstx8d+fj/XH8qjIocIxXbwCPuq07z77T+JK0Ki7RgxoUisDfzsCQr2naTxZVw5RkN+ma3u\nKkHvL7hu/D0c5cNkyTpITSaep6ynaJkekuvRokZZNDGijiccNBFCb+OLR8jJGMucRFiTCF3d5bOR\n7G2kYhWRlPBEGZWdY0M+TNHMMK4/QN96iGp2mAX7i1TYT90MYVs30JB/hTYjiOwMyjqBokdHv8Cw\nvoOaPE5iniBn7iLJrmJZVWrp3TTU+VeN2Wt2wI+SLa4unua6fdfTjL5GR0eUVBGljxAkl5k7Zxgr\nx+zbcwcmfw7PuZdUz+68q8ApMhNhiRYZeSxRRGufNJujnVwi56a4HGApe5hgI+bkp37zZdtn/9m7\nOep+ji9++V+wcX6OxqCNSlIuzy8Tao/zcxFKDtGPAvzONlMH78TIHHG0ietYWMpDWlUEIQKfOLHQ\nRu4AjuIAk/OwtUWYdhGpxNgKgUsadXGLVSr2CPVqjSP7r+PUqesoFitcuHSBiT1T3P/Vx7jxxM3k\nqlWCBNIkRTkKpSTDQ8NcvniZrz/8AFMz19Hd2ubNb3krn/nC51Eqot/pgIoZHz/KoLlJz19m6rrb\n+fEPfohuv0+pWGFtfROlDV
GmKeQdlq4us9FZJ8Pioz8+zLG9VWyZQ8hDaP0sigMYWScxD6PEKaQp\nEYk5bIp0eBYj2hjWWYsuYzkBRXMTlfhmWt4XyIkZclmOvPgAofgGTnYAm/1gOaBTkA5CKLI0RNBl\nTvwWaZoirBEqss7Eb/2zl33W/rUHCcTXGDHHQR8jkm1SvknB/E9IXBK+gRYhGzzCsJgmFm2K6f+A\nUANSrmDp43Q4T0XfgrZSUtPAMTfsUAwQaPMoHVFFM0vFTOFYd33fA/41S5YwWUfoLpFKicwqTngS\nXz+PI5ew7HFEdoSt1WVGJscp2B5B1kGoVbQukpfHyOQyWgcYYWPiENe5kSi6QixbFNQBWlygInL4\n2Qq13//Ay7YH//M32VyIefSBvyToddjotOmudbBcl5WVFoMo5uxqjBJltE6pjk2AqRHHAxxXkPcK\n2J6H1nkS+phEoxFInaENpDpDZ3onIMxOUOioh5er4OWqvOveN/PRn30PnuPu6Hu9hDMKgxTPewmt\nIXbEvMM44dLFLpeudOhFMblCDiuRfOqz/wdvf+uH+Mr9X+CNb3kLX7//Afy4hY575MoTkKQMuoto\nu8jtt7+Ft7/pjSTpADfv4HcTOs0uVs7DtV0sfB59/DTKVrz3rXu59ZYcWzzOqPkBLO3Qt85SyPbv\nMGX0EEaWCbMnOBv/R44W3scWj5Nngu30EiPqVsrhBG37ebAmcAw4TJDJyyTGo2QOI7DJ6xNoWcSk\nV8hUD0PAuvksRXkLytgoDlL5+Peuyq/+xr9iYFYpU2Mo/QCpEGjrCmRjSDnA0gmpsXHMQXrWp/D4\nIEavIigQiVk89iKzUZDDIBy06aEooEWAJI/OniGyruKa96MluDL3/6/3WdK0SZL6aLeF7w+Y7/0q\nOS9jX/7X0DxIkij2XfcGouzbdKxv0MRlKL11h4JLiNJVUusqJp7GdY+QJBFKjSPSbbazb+JKRWq9\nEU/uVusIaPPAlz9JGFlYWlBzy2wnbfpph1rVItiMuW5I0Op2WO9Bv9WiVK2BNighyLIEQgvphZjM\nIst6kFnEZEgFShukstBSkyQpJojxKsN4hQJ76/u49+3H2FjdxivYeI6DUjaOY2FZNtrsAI12VCYl\nnm1z8mSdkydGaGwP+PwDV0mVw09+5GM8+NA3sWzFV750H29/x7t47JuPst0OiYIu1dIEfs9CB20u\nXSGDxOEAACAASURBVDiNFpqbbziMG9bJK0PdGVC1VpiY3IORkhs+/DqeODvHw4+scnG9zDvfdQWH\nHyaTFlY6wcCaJRNj5IB18S+JaTFVrJOZBJcZ6vpOavIe/HiexJmnIk+RNyVMlsdXzyGMgxI5AnEG\nbUJMNodJytju9ZhU4Vh3IuVzlNJb2VL3odLdlxkzc5mc8JCmzFX1W0yIn8YxQ/TkfTjsJ5Utcuan\nifSX8U2Vgs5hREAqBHlz5w5k1xpBZD5IiaIK6SxSjhCpzyOsG7Hp0xb/kdDMv2rMXruXv4IyhdIk\nW+vnqA4bqs5HUFYTSwVk7Kdo302cvoBjNEk6jpNcwLWLtMLHSd08W+l5knaBA/k6wskhZIi0irjW\nNGt8mil+kVR8GSfZfTes0TyDMHk6vVXKKkevPcB2BKnvcmV5g0rdQ0QujhMxVNKsb6+QL06Rpik6\ntbFcFy0UUmtM3EW8xDlMUahMI3I5Uj9DCBvXLmFXq5gswzM5/tff/mlKhSKZ1DhYO+yZOCaOs5eg\nrilSgm3bWJb9Mi4QoDac46d+7Hp+5Tc+x5zr8AN33MXW1kHu+/qnuf9rX+Wuu1/HM08mBGEXN+eg\nhETbLr7fYnt9lTNBn4+8aYYj11mYfIUk8FBeTNDaJF+b5M135Hj/227nudNL3Pfp9/ITH7yIJe/A\n5iYccSNS72CYZvRvYawt1uMnWExOY9QWE9YbUdSYSz/PuHMrQgb4aYDLbfS7a5j+jbQ3trEnp1H1\npxgMmnS4StXbi+MKRpOPMCJfS0M9jCdvomg5u3yW54O09Z+AsAhFF8d0afIsgYAR3ccxb0aaF7HF\nPQzRICLE5jpCTqNJcXQRYS7ji5hcNIJy9pGpHJiUfupSExlKjmDECgWO82rrmiVLvjyCMh3yI5to\nYyHMJsZM0uEzFLNxAnxiZcD0yIs34xNhhIPnTmFpl0PuhxE5i4DvMIhfJM8IW/qPkEoxw70YoBOF\neGr32wG14q3EyTksLVhcX2Nq32G6W8s0WxGJybOxmhDpCCk0tYLF2oreEc/LQoxyyDKNV8wT+Fuk\ncUqxWiQIdmb4UkJ3sI00OXK2h7QEJAFGaE4ePkbOdSgW80ixI3YOkMvlEEK8zJqB3Qr73wWKSiMx\naH79l9/Fz/8vf86nvvhp7nzNXQwP72fp6rNcuXiR+sR+mquz9PsBsbSx8yUq3hD3lGP+8a/dhfQq\ntJ8/gx0PcO08/aBDp7vF8L4bsHvzNNYXOXSkyM0nxjn/okPuyMep2TOE/RolT7MWLnIo/9NoHMrS\no5i7gXL2NmIxhDEvcrTwe0gteP7ib2AvV5HJPLn6CaYO3kx76+vMTO2jv30MJUDrr6Lq2yh9BKwG\nq9FTeNKhYb5BQf7TXT5riU8yLX6Rq/oPmJA30zVdFE2m9W9zhQ8zrg7jMovKFhDZQRw5jRLnEazT\nFwOG5SlCdZ7MDPDtg6Q6oKj2YqdPUlI3QZoSSw8pCmTErxqz12wo2YteoJHdx0Bv4FmaF3tf5onO\n79CMDxGkJ5Bihsgs0A4afHHpB1nofxmhXTZbyxTUO+hkZ5BJj+cHv4Mty2RiAVseI8tSOplPLD7D\npc1HSaOJXXZ9ex6/l9DdDEk0rGzOEZg+E2M5qlKTDCK6bZ9WO2BtaUClqEniGKkUEoNlWwRBjyxN\nSdKMYOCTt9TOAw9b2BTIF8s7XTKjUdJChykf/PF3ML/4AnNXr9Lr+mRZ9jKI5++DdP4+0euVVC4h\nFLmizc2TdXJa8+yTj3L46GHc4hDLcy9im4xcbRxMiiMNeeXx+z91PT/2c2/GmCrbLz5PFHYpjY7T\nXl8nGQwYrR8lExkXnj/LynMPsnj2KXpJi+mhATP8JIP1e6jlRrDsLgVPIcQclt/B0q+hyN30zRUc\n4RLpac587V/y/F//KqWtOo7tMnroegrVaRYuP0Z52mFztcHm/HM8++yXKfWOs/ilwwxLHyGKjDin\ncMVBDuhfopW8oiNlKthJjrK6lUr2WobEOMPph4CAPfI9hOYJJA2+EvwmQg2IxGfI9DoFfTMlOYw0\nMzjmtZSyn2E9+ypV+Rp64l/TkjFCzLIi/5QN9TBrZp6Kfu+rxuy1k28VqwyCKlGgaXdDIuoMZ7dT\nTY/TMg/yYvMTnFs+zaPPfJ5bq2/BSg+xGD5Cxb6DRK7hUGRd/xkz2fuZC/+MTLxAEF3mOwtPM4j2\n8fULjzGUTZKzdvfNxfZxwqRFbWKEMIsJ+z5KZ5TqFrfccZx3vPUmxodKxFqRSEMSaZL
eNtLs/MNj\nIrI0JUWQpgGamFhHWEIhZBFLCQZ+gJQOxeIQUdinUiixtvI8QS9hz+QeSqUiUu6wJr8fRm8HT5Ht\nwlFkWUaSJGht+NmPvQdhPFLp8shDX2Z0bB9xNmB+/jJxt8Fgax0hJL/yvhuYuekW7FqV/ux3MH6b\nysgwfquHsBS1WhW/0yJdXyCPZO91x9l35CTd9Tn63WW6zSZVO2ZpVqOTo9jBfhAxzcIakbxKJwxY\nfPZJLnzr33D1a/+eurQZZAM6qw10lrG1MIvfPI8VR/idPMuzZ4gGPdJWl6XlOSpextVnRiG+mTBw\nKKQeiDpD8qZdPttrPk6sYspGoKRNqK+g5U1E2Wlcc4A1ztMx1/Pm/L9GmwHaJEgOoPQ4uexe0AsY\ncZzU6pK3bkWnF7EooYQmEQ2Mkkymb+WY+SiB2D3E/vvrmm3DzsffINeZYaV3lpVwFn/dpV4x6MoZ\n7p3+Ya5snsNKZ7BVg0fPzzJaO8Ta1hMcKKfY0mG98SzCgW+t/z7DxesoDv8c/UhTU3u4fPVviLII\npzDEZnNul92vf+m/UMqN0vO32TtxgNnZWUYtizQSLHW2iE3Mm99wC5cvzrLUznjhygob61tMzIxh\nKQ+EtQPPyXb4L1FvgFWAKHOJdUqpWmbIgJcv8wNH7sZyEl64eJp4kOOtb3o9O0Tk3biMVy4hBJZl\nvQxaVUphWRa+7+MpB2kp3nL39Xz9mQv4vWWcwo1YVokkbREHKYETMykL3PnWWxgsngfTwvNqFEeG\niSKP3uYCxZEJ0ixPq/Ft8qU78MM2NWscEbbJWw5+4hN3AlJdZqbisbUi2TN9O834Eva6A2lG2HwC\nS5cwJiQVAVs9hcZFOHKn0mZ50rBPKlO6vVka3ZBynCCkxebqKlLYDJuIp7/4CW5+94/QMucpsorH\n5K7nIU2KNFPY+qNoZdDaIrAeIq/eRE88gMgCStxNZL6KJ24nkd9mVpxhf/JPEAbm3X/HdPYTpGKV\nIb0P377EBf88w/km0ngM6dsI0WzxV9hi/6vG7DVLlievPMDaVoMkGCXeTrCdHLODHns6x3lePcr6\nQkBPnqWe80iziLnZ89i0mHe2OTi2xUzlCH+3/DjXj3+Qhxb+Lc9cWeVNh/47Lp7vsdrp4OlpvFsG\nbMX/bJfdjfUe9UIenRTZbDWoFEr4fkQ+iXDjkHarwwOzfe65eYbMatPrVxjEGSiNdCRh3EWaPFGa\noqRDloXESYRlg8bCSjVTxSP8m3//i5SG8i8hy38MKcBgdrT+DQjxPR7k92NcvvIco7Uml8sRxxHa\nxHz0f7yXBz96EWM0SwsLIARJlNE1XUQc88s/cQudjXkqYzNETciMorm0iOcUKYyOo/0Bq+tLTB86\nhSRjeGiCzC5g7AK626JYGCPutLArhtbGGqW8RX8jxdo4Ry+NaLfWSLUmDCMiP8LzFNtbG1hJjEqK\nlCo1HKdAlvqEnTYuoxT0Cn6vQSZsFJrt9jKOregGEWcevY9Tr/shhBrGGLXLZ8IYjNgBI4lsi9Ta\nQAqXK/rXSZp72Df0Ia7qT3CdfBdb5iy2VhTVHrQ8j6DKSPZBuuIqw+l7gCUG4q+ZMIdxzCRxb4NZ\nPsPJwp8zZgok5tXflLxm27C0c4icHsEblMgN93CjAn5nk5XtWR55cpWLi5tsNhtcWeyx2lxlfm6J\njtpiYyPlG889wH956hPMXlmjs1hg4Ykj+M2Ezatt9ozezthQlcHQGe576j+ztr37wRtjCJKYeqXK\n6NAYuZzNdjAgCAZUh6scPzBDwRG0A0POShmpFxgqV7BIUCYh9UP6/jKW9MjiFIGNhURaDiVR5kfv\n+VH+5NO/Sm24hC0tHMt+CQJrvcSzBPjemeT70sReWt/blkGa7oi/KWWhtYIE0qRHmvgkvW1GxqYw\nSQp+iK0kJ+84SdJv0L76BJlwsHIFSnuuwyiXxtxZ2p1txieHwR5i9txZKiMzpP0O0hFoqbA9QZh1\n6fbWKFUEztAQ3aVHCJVDJhVK21SrB7AtD684QqvZIRMeGYYIQT9MMWjSVBGnhjCLiLVFEGpcW7Ky\nuETeK9HamGd6ZAKZKhYXHkXoDCvZ2OUzRI6MPpFcAHWMUjZGX59nkhMcrryDyDzNEfE+ksxhIB5G\nycOIDHriOZTwKOnXU9cfQCvYsmaxs1swQZkrG49j29exr3gDc80/oOk3cLNXR05cs8pSk3Ws8VXC\nlVECUUNVFLX4evxWHj20hGOF7D/wGp567HlK2w4HD1bZDgeEjTZj1VOsz2+SuAH3t/6MoGEYqtS5\nsHCZXD4kLC+QpjZ55dFb3/3jS7kCEk2v3yHLEqYnJ/EHFgsLV7FFjl7ok5cJZ8/Oc+dtM2z2t/H8\nGOGFDPp9Qr+3g3irB1hCYCwH4TjsK97A//4nv0K+kkf+PZIw7OZUfhcrblnWrmR55fd2f6ZRyqLT\nDaiUc2id0E0VtZKFZXnE6Ta97jCW42CCNjMVCyUF5eIY7a1ZwtY60aVZ7FoNckN4pTLYVZqdJtbm\nJpOHD6IdQ+x3KZROkoQh5596lAP7plGuj243aW0vUsiP0vUbbK+ugXZot59FZ4Iw9rGLeYKtJt2N\nNpYfY6wCQ1aZQWcLT1UROYduv0W1WsAPehhScp7H+soG5559iJljN5BvTJDuM2hr90XKWDyIoUTM\nZZzMoS+bLMdnOOJOkugLjOqfppF+hyevfo2Th29AyxGGRJ6WPk+H56hYtxLodTxtKKi92NkUuuZx\nJ/+cxDyOzfup1jo7rWT56tddrlll8WWDQ85JyofKiNIsL1xe4cLFVQZyG4RisFniysUt9o8cpjph\n0cnmGM1uY2z4MB09R/WIoXS0izcyztGbjhOVAuzyJr7foLsdEaeSlUaI6OV22XXzFjLL2Og36G53\nSYIBNx4f4yM/+A5MkjE6UmHPgb1Uaw4PPnaewaCDk/dptbbQSY8kS4kzgUxirJyklCvyCx/+F3zq\ns79OoVzApAn/T49VqZ1K5/v+zuzmpeQBdnXB4HuJlaZ6B6eNodHqE2WKJItod7ok6YA0Dhn0FkBm\nRMk2P/WRk7S3rpBkAdXRg7hSYVUmiaxRgu6AcCCIe5uUXJfC+Bi97U22lq8yefQW/DDjgb/+FFMz\nE/RokkY+WWkc6dl0G6s0NjdxyqNoJwfOKDiKzMTE3RZp2iNyQmSmifsxzdYiJglIs4RBrw1pgiVs\napVRivkynsqYOHQUrCEGWy02rnyH2ac+vyMh+/dWW75AJCPy5gdJxQSNqMP6pqAxeBZPvY6+JRjK\n/yjvvP4/UFFHKKthmsyyFS2Rt16DMR1ypoE2bVx9hC3xDfppG+jgyruRegDWKJlVwBNvfVXfXbNk\nafQW6Edj9LNFVi6mnBgZY7IwzL1vCrj3xA9z500VRiYtulwm8STSLrHqP8daehocQ9IO2LqgyIVT\nJFaHiu0gGOOGo4eZW1ukf0Ui12tEVmuX3S
gES1m4SlKsVMjnS3QHGecuXaQdbqLthJLr4lmaoUqB\nbgCXFnxcJ4fM2tiWi2PlSdOMYfUavv63f8n7PnI7SuwIM6hXDNSAXecQ2AHIZlmG7/v4/mDXfOW7\nybSb9Z4Rhyn5nEO3nxCnGf1BRrPdxSQJRg/IogzfX0JHhsPXnyL0u/S7G3R6DVRllNrUUcq1Ayi3\nAIUSQnoM/AYbV85Qqw8xPH6IcPM8T9//SdbXQ7aXn6MzN0egJMGgQdjfZn2rg7EKEAuSMKCUz9Pr\ndhmaPExtaIyh2hSuVaI7COh2GlhphrQUlcoQCE2WasKwg85AkLK63CRr+oyNlLBKJSxVYrC1hXqF\nzFyYNmkNHqat/oYXrv4NmoeoO3mmi79DK3iMAjOgDalp4HEPIj3GyiDP4dzvYictzqVfwJg9XLH+\ngq75CshnGXfu4bL4NDprcVH9Lib6DipZwgr+3xes+G9e/b7g7OKDHKzdxepkRN8kkBi21/aTlS4z\nOfKPePrhP0LU8+QHFkvLUBrSpEFKphZp9hIOHTjCiH2ArWbMSrjOnuoGc+J+9u6dwW3tw7jLDMJ0\nl900i8hZDpPFMZ6/cJFerczm6jZ5b5gbb7wZx3g8feZ5quUcbg5YG2AP5+mHKWmi8fIKiyH+wx/+\nIdfffhhLfReKunvr9cr1ykFjqVQiDEOCIEKIENd1UUrtMDVf+u4OlRgMgl6QUi44ZGj8KERjSKMB\nWqfEfgy6iSUyDD4yXCMJDI4liJMBQob4ica1h8hZBaTMSO0cOrYpuQ7dVovSkEWmAx557Hn2jFVp\nb8xTmj6Av/giqUlQpSmkA2FvlXYosb0hVhpLZFqwvrhAGsd0oxBjQoKwSy7nIK2UQm0fra0N8oUK\nURTT6q5RKEmm9h6g3/NZXZ7F9TTDpRF0rsvI0Almz96369lNi18gyeUhWubmmUMM1NvYN5bjha3P\ncKRyL7FZwZUTCLMXISRnOj/BILuBpLDF/bMfZt+koJ9/Pfn4DSyK+5E6oGA+S92eQRAzxb08EXyS\nW0o/wUY8/6o+vGbJ0hgEzHiHeeLyIwyP1nBKTSjkkINxzrzwFM+NPEd9ah/DToVLjXPUx0tIFVLL\nzbCVzDEc3wSZYTN7jsLB55m5ciNe3uL8kw3IxYxbiq31TY4duhP6T71sVxDR8jV5bXH3qRvYbPpM\nlMYYGR1haWONJAk4fuw60kxy7sIs5XIRO05QUuMol7e+4Wf4uV/9wA6RV363DZwC6tV/7CuWEOKl\naiIQWKQZZGHIdqNBrVqhXNy5z/bdCpSlhjhOae1MeiA1uDkX9IAwaOFZeVITEPk+XrFIomJSS+FZ\nCqMNrqyQmh697gqOWyeNUnBtXDeHqY6RhAG9xjxZuE1vEDExVcIt2oSDiEi5mLiLpXr47W3iNMNI\nBx0u4KkaG4kk7bdJlCHqRziyhGunCCJIBc3GKpkvGJocodcpUCkOkSYaJ19gT6nGUCWP3wlwSjZG\nC/rhgIq/uyPV00+iGCavhkl4CBkcJ3Fd7FweKUpYFMi0YCn9Fs25x1kUPgV3kyQJOTb6br518TF6\nE59gyKuzsnaZ6eHX0ctXcFSd1eQ8m/0NxvVdzLcvEYSNV/XbtTvgO6Mcdm+i7wywUxenFRP0u2zk\nn0MVLAp6hEy0mV/YQEqPNB1Ce3N4jkNeH6I43UUOalTrQ+iNI9jFNRrrPuPjOYwuUc5VmHbHCXKz\nu+wGcUrU79IMMsxmQpRk+HHM/MYq103toZ+EnDu7TKO9RWl4gtpQnv5an6Njt/Er/9vHGd9T3RH2\nRgDmpe6Wehmn/d2WMPAPOlu7Pxc7xGJhSLIdxmYaJzQbPYxWVCoOWSbQWjAIU+IsIe1npDrDEQJl\nCbI0QicJmeciMkMa+TilCh7DxGaTbnsNKT3iMELkHaJOG50DkSvhJhlaDTCDHtXSXvxog6sLK/T9\nEFvkiMM+wrXJuRY6zhP1WghhU3Bz9LptCqUhfKGxRYpXLGIJ2Mp8MDH9RkKmFZvNNlngUyzWiMK9\n5MtVwu0BUZyiVMhmo4HlWPi6iQlHQGhMu4NJd2/DUvVtFD8EHKXFXyELDYLM0EkinufHaW3m8IMN\nBsM2Nwx/gD39H+LhxT9gbWuVuvawxUHq9l1sdzLWFk+zsvgsd9x0jJWVP6XR3cStHGW/V6cdhbTa\nl141Zq9ZsrRbbb7d/ibXHznOUvsqncwFMUyjsUJlpECeMfr9DoFZZJDaVPObKDfFjzukZpvNSx6j\nZZuuVAi7Rb+fcHDvKc43zjDY3iaox+isQBzt/qfYXmuhZIrfCbE1SKUouiVsS7DWamNiw6Ejo9zo\n7SWKU2YXNO9/78f4sZ99L2C/tN363kE8y7KXhoiKneT5hy3h7x7cgZevuLws/ywkWZoRhwFZptEi\npd31yXkujmORJCGDKCHLdkTDtSWoFys89eTfkUQBlpXD6B3Ks7JtEqOIEbSbK7j5PEq54CrSzjZG\nuKTpgKgVYZWK+HEPR1UQ2RbFyjjr24/iJxERPvg9YiMQbg1RKuMJD+XYJEZSVKBtQzF3iF7rKcKw\nT7/rU98zRmttmThrIKICnZbHTTfeSuaUcHQPWahgO/uwjcHWCUmpj53Ps7mikEZjOd7OjW29ezv7\n2MIyb57O0+Ap7PTDfHXhD4hI2Vcao7HlkHaHgVGis02+6P9XBkmTREl62VX2FPazIr7DRvMqI4Vp\nNuw1pLb59vkv4IcNlv1LjOT6LBcU0yM3YKrtV43Za5YsSWaztnkWLQ3jk3V8c4brJz7E8ta3iYKI\nTfs71CcE9XyRwfYayj6GHlRxi12q1iEGehW76pGEDYLEJsugHawQDxR7xyfIV/YTDAa00wu77E6O\n72dx7TxBEOKViniOg3QUaRKjM4lnG+LUwsQ+BTHG7/+nP6Y+UgIh0EYjNCC/NwORUhLHKY1mD6UU\npaKHEII0TYmihCjeEbQoFAo711tgB6FtdqqSQJLpjL4fYFkeRoHlWPRDTU5omu0BrU4fWwmiKCCN\nIHE9Pv1/fpI4DUh1hmsp0sE2iVtB6QC/v02tPkow6BNnfXLuOBvNJpZSCGVjey6DyCLvFugNmnj5\nmPZ2DAgybTPopWgToJwCabFMQUuCtIuTQa/dxxMWGQqtN7Edhd+PyOIBy4tzmFiSc8cxUmFbgqXV\nbfrb58nXxzh88Aib2x2G9x3boSlrF5G1GR0Zwx/sNE+EsAleccC3osN86bk/Zm1rniX/AsdLdzGf\nLbEu5yk5hzkyciNnrjxEUFzhZO0uHjjzRVKZoXJbbHTXKYopNpinVLnEUDrNVX8LpZ/A6JQ47OOM\n1RFpnzAyHJy+B/jy943Za9YN278nZXrfMHl3lE5wmb3ezcytfoNmeAXcImSGJCnR32xw1w0/z7hz\ngnpuGCspYkzGVPFObJMjSPvI/CoTQ0eI1BZ7RgooE7O2OoeTpfivKOmLjQsMFcd2BmNr2yytbNJq\nb
tEPAtLMJzUKXIOKx/nwP/lDymPuS4kBSkqM0LsqhRAC21bUa0WUtNlq9tFaY9sK1/OQlstGs8f5\ni7NcXVoniDVGxyTJdwlThgyBUhJp21jSI+fl6bRbnD1/kWZ3pzW8tHCZxuYiQ4U8aytXee7MGcKw\nhzQanaZomadg2Qx6G/jtddY31wjigCAKcGzB+J4pXNuiVBonSwwizVjZWENqidIOSRQTDRKE1DQ7\nEZZbJeo2yLrL+FlIr9FA5asMdICWCe3NFZrr5xBxhOdUGRneSzmvsAuKwF9j0F9nZW2ZS+e/w6k7\nbuP4a15P6geMjtZBDDBBhOu6FGrT2NUR7NwQiAwvVyTv7Q7LlSsvcvriFTqphWhM8a3l8xT1EQZ9\ni55veGzhT5lbO49enuTBS19nyL6RRrdP3LMZNPsMlU+QhT2S9TG6Gy1q/Sq2ifBSw3R4C/G2YGvN\nZ8iuEm2uvmrMXrPKgj2NHrTYTJ5h5thhmtY6tx79MC8sfhPhZfiDKjlh04lHWJj/Owb6MjfX30Mv\nrnJ+6zTpSJPl9QTCgAPeKZr+k6hiBdsBO6sxPZWx3rhAQe+es9SHhghaTe669U5OP3WO7dYW7e0O\nk3unqA0PU86q3PnGn+KGN92GLR3QAmSKMeqlpFG7Wr3f22YJCgWBES5bzR7VcgnPFVRKDsX8FIur\nG/Q6La7GCZPj4+RtiRQpBkjTiCSOQFiUS3n8OEELm6GhUVqdJnHQJ/RbQIEozVhbm8N2LfqDHlI6\nqCxGKZvBoIHf7zDQDhY+Y5MHaPW2ELkcubxHmnqYLEHjQE5S1nlyZY9B0t3ZrtkpmbLpDRLUtMdI\nuYrK5Wn2VjFRRGvhCmVL0zcptfoQSQROscb2peexbYVXHCPqNCnkp+hFTTyrhFPJc+7yBcrpRdoB\n7J3eQ9JrMggD9ux5I/NrF7F0hG3lEN5OBU87uxkpC/1lsDIQw/QrZzlefDNb0TNkSYn25uNIsYfW\n+hJB7tto49FLXsAfuESqgeePsLW9zNqm4OCeLq899uM8+fzT1Ouv5cr653BH2mwPAuruDE/NPYzl\n7XvVkL1mybLZWUWWDflCTDd+Gnswxbn1v2KuN0+OvahaynZLkh+ZRgvJULiPWf8cW+tbdGWDoOXi\nJCn90KLVbJDFglIpZHOuA3mDu9pFxQE9wl12TclFt+DJy88zNVnn1M0H6Wx1GCibvCzwrn/0m0wd\n3odg55qJNgk6zXaEv1NNp9On1erQbjcoFgscOXIUIcxLZxmbUmFnet7zY7RW5DwLJQzFYplB3ydL\nemxvOUxNDyESFz/qMOi2SNIU5WYYnTIYxKQm5ezpR3buh8Uhke9z6ORRZi89wx/9239HFPd3aGI5\nF52l9P0maZphScXS/AbH95VIdUqlVEPlKxC0Gd0zQxK0cFwPmVekTomi47C9tQom4eDMFMnfrXB5\neYs9owWmD1yP43mMqSGaeonK6AhJP8Ip1NlcvYQnBSJ2qI7UaXe32VxeQgcBfhSjRI2O9KExoJ6v\nsO/mt7F8+js0+32GKkWsep3V5iJJqtgzcSOdcAsGNlk6YGjo8C6fBUJTcAr0WhtM1G9krfMM3X5A\nURZxlUMvbSByBUzq0s1CVM9haEzgVQ6RUaTbtjk5cSOHJ49xevlh7jzxTtzCMGl3npuOvYHntv+K\n/nbGePEgdXcG+P7aYdeusiQx5QmX7e2IxnrGaL6NH45Qzk2hQ59+2CMbeFjeAgfH7uWFxv1MD3bU\nowAAIABJREFUVU4QZuep6nFarQ6OPcP/zdybB2l2ned9v3P3e7996X16m+meFTODbQASFEiRlEST\n1EJqM0uOdnmJ7cTlJK5yUq4UlUq5HMd2FFmVKCrLKi1eYkkURQlcRBEECRAgZjDADGaf6el9/fbt\n7svJHz0EMSzBkcNUwW9VV3d9XX1Pf9+57z3v8rzPo2stTD2iE9qcsT8I5Zepj41REM/QCFt0th4W\nHosNFceqILMOm3sNOqMB0+N1pi0Vxz7P9PL8W4l7mibEcUYUJgghsB2DSqVItVpBiHkEkl5/wPra\nNsvHj2HZGopQyDkmihIx8g5ZEy0jpVYy6Q8dBt0hIuvjDko4dkqcqHieh23lUIVKPxiwvnKHwaBF\ntVhh5A1p9vaYmDlLs3Ofr3zlj2k3u6QZGJqJmmXESUgcJyAVTFPlT79wkbP/9ccQWkCq5BGDAcLM\nEUuDQZih5iqkYZ80zmj7fZzcBGHUBsMkiiO6HRgNfFZeu8LYsTlkHGA7k6zfu4G0SoitHUxL0h1G\njM04RJ5P0SyijVWJgi6yt41MIlI/plgucXvzPteu/zM+8aM/S2cU4QYBRUMlDPoUS5PEWZs0SrBz\nKk7xJH3v4QdcRT/B7caXmB97L+2dfQbS44n5j7DXvEgQLjM93YGCid9POVowaSX7SFmmUBqjkpbZ\nHxvxvulPcj/4DPaOyt3y54iiOWRxwNqtr5MbP0U8dp0l6zz32tfe8Zb9y3Adz3Ko+jXOoXzEb0gp\nf/W7lcrTFZOdrW1yhs2RsbOMlUJGXodyco5moUmj8waOYlOwixzsvMxC7SzpoMnQz2MVfAw1QmeE\nWrQYpGuU848ykvuoRQXXHHDr3r/iwtxPklu+AJffeGvdpNhAYmPqZXI1Hy/OeP3WKhX1NP/w13+R\nKArQNf3ww9EMTFNB5tK3wi/XDQ+RxIaCoSuUSgXOnjtJtzfkzt0dTp5cxjRULFNFVTVGXoChWqiK\npFzI4Q40PH/AxtZ9xscnGPS6h1mLchiOfe1rf0itNEujeYft7ZQkzZiaPMagv8Ht21d4/suvEacp\nqqqAqhFlMUkSk8QJtl1A0fJ8/Bf/EYivozNPe2cPRY2Qag2z2KK/3yH0PTQnRxwOMU2TYdwhIcX3\nJLYKo9DD1qewbcnB/ZsUx8cJwxvIzKSYJXhFn3LhOGgtGt0BpXKN9s4uw1GPkd/FG8U4eRtFlxz0\nXOYn86jj43z5xa9xfvEU1vQc/cynSpVgNERkJo5l45TreJ0Qx3q4kdxt9+j5HjMi5PyJD3N99wXu\ndF4lb+YQco9RLDm/cIy9wR5lo8KR5AwH3l2G7S6pmTEReVze+acMdQU9nOdEOo2hVVAtg76d0Onv\nEYYxN/Wb7O39haIPfzlnAWLg70sprwgh8sBlIcSXgZ/nu5DKq88NOel8BLd3g8z2iAINqVg0tRUM\nx6Y2micRJnsbe1hUsNUVDpKQwaCFpdQxzDp5JUdpbJ6OdxenYEJ+B22txO7ObUQpzyv7f4ga1h56\nMwu5D9NQL6GKEJnplHIGk+UKM89+jL3GLll1hlJBoqsmCIiiGCkzVDVDVQ1yOQNQCMKEfi9ACkGx\nYFKrFigVT3Dp0hssnDjJWNHE0DUKjkEYxNg5jbypoxgm6aiL5/UZ9huUa9OIRCNIIobdNoHvsjO6\nhRuPEImgVK2zs32bZnuPz/7h10lSUISKFApJliJkiueNyO
eLaLogCTy296/w/gtLbG/cQ6KgRRLP\nXcG971KdnkYxYtLIRcQpPbeHFBFWrkwchSzUbW4cBOwNYo4t1FhauoA/2sONY8w0IdUdcuQ4aG1i\nWDmSZERzex9LScjZAoFBUdE46PQxSzlMTWF1s0su57Ewe4S7jXuYu6tc+L4fwhtK+t0tZheWCSOB\nlaS47j6mWn9oz4Sp8ZjzEby9Dncbt9DqBseVC1zvfANhxJzIPUa/XaUmVeYmfhDdXeF24xq7UZvJ\nbMB6usYSp2i7MScLC6xsrlCbTNkKb+P1brNkf4LdoUteL7DXelhW8T/JWaSU+8D+g59HQohbD5zg\nu5LKSyPJ/KKkMa7hj/ps38545NQz3Lh6mb1en7pxDL0qKKgJUSEhlCoz6iL1sTxeEFJ2aoyUVQQe\naWGfnrcOZYFSfoa8UUYmdYLynzPq5B56P6/deZ7FQpl4IEizkJzUUYo1tu98k1LpCAYWleIRRn6I\nqR926JNUkmUSVU1QhAZCYlsqjl0AHpSJgwTdUHj66cd58WuXiE4soYgEUxXUx8YIfB/dNKlXx4nD\niCRtE4QjothF0TTSKOH61VcfjCxHCN1GdwRX33yVP//iJRTVJAwkqvYAXyZBERIvcLHtPIqioUiD\nH/mFX6J37wqNY+dwDJ1MUfDcmLGpCtVkHM3IEyQuqqajChNv0CTJQkajPmqaUKvriIbP9dshvnuF\nUmtITokpmjliRWNr903qpSrDUZtiaZyCNY41vUxj7Q6qAbpVY3/QIEXlYGcfp1rFshMsfYLXXr/K\ney48Tity6KyuUzm5wO3Pf4PJI3VmT38vQXONyWNn+MYX/uihPTPslKAd4igTkBPEMuZi6yVOjr2f\ngXuRvdYeS0cWEG6dK1tfhkBSKcyzVFhkXf8THtM+gtwziYcv0pr5HD+1/Jv81lf+NpOVZezyx9gc\nrGKKPN3hiOGoCjyMJ/yW/SeVjh/otDwGvMp/XCpv+21/9hdK5T158jQN7S5ZrDGZL7F8wqBQuc7Y\nYsYPv+/HOX5ujnDX45ELU9QqQxzdIAgaqGaHJ5Y+zs7Om9jRCSx7lrJRJcJm87rFvn9AV95i0L9G\na7dMMXtYJjoSr7MTdNGFjiYskiCgPDkGmPQ6DQ7aewCoiiCKJUkqEEJj4B2WVP0wIssEoDzIbRKi\nOMEPE7o9nziBD3zoCb754leRwuTzz/0H3rx6DdM0EUjqJYPxyVmmZk+wsHAWx86TpSGp79FtbtLr\ntgnSgOeff45//r/+Ll/+4msIYeH5PVRVRVUMMqGg6QpJGqFph2KvQghS4MyZMrvtTVY3m5h5h0Kh\nyMTMEYzyGJOnTpNZDlkqGfU6uIGHmq9RrEygChOz6PD8KysMekO8FCx1krKmUq8voWgGUeZTKB/h\n6BMfY3rmHE55FqVYZ/3Om+wO1lnb2aOxdQ9bjcGU5AoVCCWNg5iBf0CxMs5Lr75KxXT5gz/+Lez2\nkBurdzgycZxw6yqN9X3i7g7X+w/nDTmKUNGoH7WJitdw9ArnnVNEW2+SZir5nM/l/d/l9vBL3Om9\nQRA1CL0erXBEY6VOYzSkJ1Z55EINXX2adblJdTFPrIckacipsXNk0iUeCfKjh6mz3m5/6QT/QQj2\nh8Dfk1IOvwO+8Z8sldcdZVTF+2iXP4cRnsG2z3Dx+i2Etcpo9auoikHh7GUu3z3CeMVCcxSG8R7F\n3FHeWPk95s779FY2+MFH/yZ/cv8+JWFh5+s0eveZnjiHsjjESZscbD8sE+3MjBF2r6A0PoRQYjRp\nEWQmqvAZuntYmiCMHyPvmPQH/oNGYnooShqDH4zIUolhKBTzNjnHwjRVhAp+5HPQGlCt5JlfXubm\nrStUx6d57dJLtLstHr/wPeQNSck2yBkamVCJ/CJxFNOK9rizcpfn/uRrOE4Fy7QwNI1UKiRJiKo4\nCJEhkWiaQuhHRFGAkyuQIlFRObZ0mp1797BzdV6/co/3nDtO34sxLZskcLnxjZdJVZ0s8YgCD3oh\nUepTrkzR6rdIfUGxaDHqpAzdEd1gDJo9eoGLDIboqoNqFfnSZ38HwxxiShMnV6ZYHaeYL9B1ezT3\nAuJBiOeGqDkwrSKEAZ2BR8kAL4KrN+9y+txpvvDCcyyeWuLqGy/Q2evTSlqod1RapzR489t7ZlmT\n1DKLbkNn4JrYNZM15S5GPcSRdYRxmjDqM17yyPdzxLFHYubpBLvMH1lm6LdZ696hrxU5WzpP2LqM\n4s6xb7zB+eqz3B98hnJSRp/f4VDI7rtwFiGE/sBRfldK+dkHL39XUnm/989uYTsbqKnk1DNdJs/u\n0I6vcXzB4sblm9RZxIouUFQLTDhrSCej7jxKrp5j6K1TKqpk1SH3Rr/G2dKPMWAbf3qD8cajbIxe\nI1/Okfdn6A0frqzU9HN07TV8EeGYGkqogIjAMAjDHl5UZWdrk1OnTmKZBjIDTdMoagpGLAhChSRN\n8YMI1+2Sy1lMjJWwdQVLN7FVg7YXYhUL6F4fpzrN2to1Lr+2TxBHnD33FPl8CRSFKPbIVEmxMsWf\nv/gnfPPlu9SqMyRxhBeGGJqGADIpURTlAUwmwXcDyCROPodAQUUjI+Vnf/5HuPSVf0d14iQvPfdZ\n/s7f/CR7t25StIuMBm2SIKUwUQfdxKzXkAH0+4KdtSuUnQpdMeLEYoG7u2381OXuQZEnJnqoWgHL\nLqPnbAyZcf7s06TRgFZ7E98N6WyuoSmQxT5CZgRKSmpKgqEgiFyKVo6Ngx72hETXVBp7bWxVst9o\nM390ijff+DrNoE+c6rTPxST3Hg6dr/d/A9Obx08DZsQ8W/1V8qU7hN4cWTii1b+MYoUkpQVc53mS\n4QKxZzMchniDdUqliKkjc1j7R3it/TzT4zMU8+M4/QKXNv4V+u4CN69uEMuAyNv8/+4s4vAI+U3g\nppTyV972q88BPwv8Lw++f/Ztr/9bIcS/4DD8WgYufud1/4d//gh+6yTxxB/iDjJaa1UemalTrU5T\nXfo+Rv0+G3sb6E6MkMvgZ5ycfxI9MzgwNwnabbzyJit7c7x/HhIp2G71GfhrhIlktD7kVKHMwjEN\n3oaN29m6dSgh4ai4vRA9lAReD/QSvjciLIbcWbnMyVPL2LZOGCZkWYSum1hZitAVFFsjtgz80CMM\nUvYbA+oVG4SCYUrqqkG3ISnka+wfbNBq7aGpJnduvEqvd8Ds4jlyuTytzhb3Vq7wO//639DaaRMn\nCYqigQBd05CZJE4i1Ad0SQCDwQBdM7CdHBIVJKgqPPHk+7l1/ZsEgSBPzP7A5803rqKmGZrIqM/N\nIJwCWgidnkfkeXiRxChWOeo4BLFLNIIjkydZGHuRvY5Pa3+PvWKFMStGqDHD4QhNCFK5QZIZGLaB\n5iiMW5PEcUCnr4EeM+wMsRQNaUrCUURQNcCO2Ok2mKpViH2fnjckSAx2tnrEqoc4ajB7/hGi0Soi\nn8DbBNv0LE/OK
TNdi9ht7VESMwQ9HSOr0opvMDPxDH5o4XZ3GIgKk/JJPLFKqXqUkA2iQUSSCWJr\nh4pyFn/QwTYz3MEYo3aEKG/zyA+aqMYUUoRc+dJf3MX/y5ws7wP+C+BNIcS3arD/Pd+lVJ6bvobu\njPHKjRrHjx2wMP8sm+4mS4P/mcLRMr/21V8gr4yTmwyozryHvb2MF67/EXMT87xv+Ye5v/UmGga6\nLlmN/pQLj5+n+41xFGkx8ppM1HTuxX/OaO3hp1TePooX9tjoNZkROZRMJ+655Gbr+I0BcT0lUDyu\nXbvFubOnME2DND1M7TRDAyUhSTNyloJt5ggtSX/g0uz6TNVL7LdHWOah6leURKzcf4M0zWgPd2h2\n2kSpy/3VK9y7t8FrF6+RRApxEiMlKIr2wCkEMjsMucSDEyWKIsIwxDQNTNMBJJqikKaSp57+ABee\nWWR35QZ20WHUWAVV4fc/8wJ/42d+nFwpD06Bfr/DqLWL58UokcAu5mj0VjANhySK6HVbRMmI49MF\n7rRaqL2EzR2TacdA2C5qAqGZoSlF3P6QwWBIXrdoeDfQi0UKuTyOM0ni38EdBWi2g7AFSRiQN2x8\nXzLqp4RZggh1EkLE4izylMakVmZ3+AbvO/ZhLjW++NCeVTmFxhyt5jfQs2nKxjF8bZv13Rs4pTH8\nsM2+v4ITW5j5OUL1HiM/4bzusSZsKuVl9pq3QcnRH6yQs6u0k1sUCuM8ff6n+Nrqb2I759gfvUHJ\nnOCd7C9TDXuJdy4EfN9f9KKU8h8D//g/dl3Fr2MXt3hv+efYGHwJOz1K2E+4mvw+3v4aRW0CRy8y\n6m+y0tghIiIUI+xBgaujb1KZHrFknmWhUOPu/Q3u7F9i8UKe9PoBiZ7HiCcYbNxn/sg0rH2bAMHv\ntpB2HzE5htaxCLUBysgmJ/UH/RUPKce4s/oaY+NTTE1U0HWVLDsEP+r6odpXloFuSFQBajlHZ+DS\nHXpMVHN0hwGabSP9gH60zslnz/A//d1fxc4VgeukqSCMRiRJgpIJpEwBAQ/4wR58hiRJQhzHyAdh\nmGFah2BDxUAhJUNBUSRff+k55hd/AtfrMnnkJDdee4l6pchue4ReLHL3/k2iUZ+yXWboD9Eth+rM\nNFmqcPzoMbIAWo0NhJpRKleRep6/4l3myys+q60e9bEyY0GEpipEPUl51sbMSezyFO3dLQy7RuR1\n2ekm1Cd1TMckFjnSwEWXFsNBhF20kJlHiCR1TEJA/Z5xwvEuT57467x0/X/HyTncGN2gmD0CbxMV\nyufnSCMPzTNRGee1O1/k6aWfYLLQBTPD97ewkjlsy2baOcJ26xaGmudm5KNrKlvpKzjlOdQ0TzE3\nQ8dbIdhWKFV9Lt3+I/w4IsptUS8auJ2HoTZvt3eNRf8/bD1BAZ1bO0U6/T6lXIxQJXv3M/KJzuKp\nOarJY1xtfRMtbpMAHz3/j3jx/r+mkC+RMwSDZgPr+H2s3Y/x+JkTfPXNL3Jta521rRvUx2zWN1Oq\nCzGf2fl23vJL1lNI2yNv1ylvmsSxQEsjxs49Q9jzsIt18oVxyAJq5TkeO/0MU1NlpBQPhrYO51Yy\nJEJ+a05FkiSSoRuQszV877AyNnOkxn/129+H4SnohsYf/8p9yuUCgRfhBx4yFaQiASmJowikQvYA\n8p8kyWGuIgSaqqEIFWHo5EwLmaXE8lAfhixlOOqRpQlJIhEiQUHwofc/gzpo8PEfepIJw2Vmfgl9\n8gTS9Wm3tul1M4a9XTQB3UGEXVAomA5BHDAYDtlv79Fsebx444AstfnR90+hZn06niSKR4gElJxC\npT5Fv7NLSc0xEimdZoOKU8UwTXZaHWaqBdaaPQp5m5ErEGZM5YkS8XGPcvAsbfcqrt5lrHiCySM3\nuPW6g25n/PbOt/nefnb8FCkqQdihVDzPUb3K7e7rTEy8B99/gUxZ5PGpD3I//H0iJSTyurhDMLIx\nEjFJ5m9iFsqcOfJhUs/mtbV/jxAJtpDsDVJqjo3QC0TpECX1+d2/v/afF4t+45bF1cY9yrNTzMgl\nZo6uMq//A665m3zj9m/S+NoW5YVNPvrRD7Az2KQWvofRaMBBc5Nms8q55WV6cp2VV9rU7c9zpfMc\nhDmOmT9AcXqSa81VhN+hGEwA3xY0mqufZSd8kTTV0U2HLPNIfUE29NANncjtE9l5bMXB89psNDZJ\nFYWxSh7TUBHi8BQ4ZKg8dBQhFHQdapU8SZJQKlt0XRfTkPzg8qd44fZnCBz4np+e4dqfeWRuQJJk\npElClqZvsU8qyuFpoqrqIa3rgxNF1w+Rz4ZpkcgMVWhAhhSQZskhJkzVkDJACI0MeP6lVxivF6l/\n4SKf+msfY+gVuPvVF7GihChrkyg600eOks8ZTKERuCO6vS6GYTE7UcMs1hHyNg4ufdXjyxd3ObsQ\nM1kukBTGSfw+ritpbGygmzbdLMGwVQwlR8t1cVKQcUQ3TLADFc9MEKUCz/6NZ7l0+yL93YSnH1nA\nj0Fkc1zbfonN2ERLJObw4Qd4pljEMqKcO4qKxoZxkbEjBqrnE3uTBM7rPL/7BmPlHJrRpnVwhFjv\nslw9R6/dRsu9h4H7OruNVxhFA8aMGTbdW4hcgaXaSda7byDcA/L5SQr6BPAwMeO37F1zlhl+Emv5\nCzQ3+yT2KkG7ijj1eT7xvl/lwpNj7DbXeWPjee5cbOGLjJ5ymZutl+h7IY/M1nl95038QQWZKly5\ntkV9XiE/0FEnb/HqlTfIWxUm5DinCt8L7V97a92DwT7rvRbvm3s//WgfM4vx3ISdG6+z8NSHGPQH\naOEAJ1clCQP6nV0qhSqSlNFgQKlYplIqYFkaqsiQUnnbQJdEVVXSLOTY3CRJEtPpeTT7I3AHdDKX\n2rEyB0NJ+iC8ejuCOU1TVFV96zrf+kIeElkcxsJvI+EjoT/qoesqcZJiWTaKphLHEVKqDL2EdpSi\n1kpsX7vIVMUhX5nDsJcwtDztdo/2/ib7zS6FagnFjenGXbIsYRAMEXrA9z6R47OvRXTdmFavSBJ0\nEFmbyeWj2LQYDAWjoYtqOgR9n1QKUPXDAS7NwvVT8rM1ji28l9WlO1y5uUWvqzFUJd+4+g36mk+t\n8gbJQGW59iOsOd/ECx8OhYzafcKdZarFM9y6+hInzpfYarvkCiskIiIczaOqPbb6MZosYeY7VKoB\nL1/5AkdqUxDdY8xcQPbHyfwWPd+jUs6TRhGGTJirz9F1E2TgM1E/wXf0z9+yd22eZU28wurqDXpa\ng4++78fJG+f4jf/zeX7ljz5InG2TRhMMuhYzlVO0OyusH2wjotMcrXyA5tAn7CeouyZjzDA/v0BZ\ne4L8+Dx76+v4Tfj+s59ibO4UL1366kPrutF95qOz7OxtE1QlqhTgxUBI4+5VlDREZhmZSEmVlG5v\nj3ZnH8eyUVSN/eY+d9dWuHVvldure+w1+wy9mINWj8HIJ8skUhqsbu9yf3
2f+807NEcBe34fuwDF\nCyGP/cwYqAKRfWuADFRFRdf0t5qMuq6/ReOqaSqGqZOmCYo4zHE0JFEUo/DtMQFFPQzLFCSagCTL\nMDD50z/4Y46fPsfyUz+DapRo7m9w995Vbt+9QtgecfrcBZYXTlKYHKderlEbqzJdm0NqCv2Ry6mp\nFFVTuLXbxRV5Iseit7NHbxiTZSZRrCEMSYJJLDLSSOKlEUbOJlNNjPc7RMZdCAVOqY2V05ipTuEz\nInWbbN2L6aVNNja/jjGS5H3noT27e6nCVDZDHO2wcPIM9xsjkA6jfhdbHSMMDkiCCnXtFI2Wx24L\n0sEEi2OLjOlzyLDIvd0tdht36cYRBUXDdwVKOkfX3GNq6jxjxiJ6WeUgfOex4nfNWe6uvkl36BC4\nBp/7xn2uX1/jXO2TfPDUP+BPX32Rl28/RyPc4sXbn0WJltBFAUeOMQj32dlu49gVBsVdjs58GNuq\nI8QW3bV97ILGR578ARTLo5rP8eHv/dRD656af4bS0jTTZxYJRZdE1cmcQ32VqNVAyogsTdFliqkV\nkaQ0mpt4nsvikUkqxTIiEySpT5x4DIZ9eoMBidTYb3e5u7bBsDsiCQWpknJt5wVU64B8Drr9ferB\nLOVcnY/8vROkWoKqqKCaoCmHBBiqinhAGq7r+uHYsqYSJzEIyGSCogoymeAOB4ekFkmGTDOSOOSQ\nOEMlSVPSMMMuK7gdl/GZ0zz/3L9kZ/UaipSMj0/y9BNPUTv7JI3GAdeuvEKr0WDY7xIOMjRhMFdb\n5MSpZY6Ol3CUhExReG3FI40UumlIGmXEMkM3JbELXuKTxBqpBVOLT2N94DiP/rUP0xj5dCv3aDY2\nsLNFUtUiCWyydETRrrA0M0PeHsOtrtEqrHOgbj+0ZzPOPJ20i59ESGvITOkMqoiJE4tmr0uSCVQv\no+FfoW4sog8Fo1FMomfstzcwgpilsXn66j5eH5rWPdRchfxsQKaFhHFKIJpEniS13rkpqX7605/+\n/98T/l/sl3/5lz998mMq/r7AU1ziwQ63+tfohDvcHGxgUuZ+5yrSV5BujZHdZ9jbpRPcxVDGMJKI\nrrrNTG2B9e27uGkfwzTRK3lyWol8JeNg1EYRJrc2v8LV7NvVsMp2B+lkqHoH19eQ+JiuiZNTCdwQ\nb7CPWSpj5ipY9qEQURB7pInkyMw85aJDIgVRmJIm8SF5txTohomhacRRSDfoMVmr8pmv/Q67gzWS\nSOc9j32c+cppRukqgT9OwVT5oZ9/lr66SXpwqHyMqh/OqCgKGaAIQZpINN0gyw7zkkxmSJkxGPQR\ngFAESfIg70GSxIf/k1APiwo151A0aX3tTS489QTFQoWBO6TZaLC7s4vX3qRWrlOrVDhMglLCos2o\nP6DbHqCZGUIzUKVLs5siMtjuhIQjBcsWCJGimpJYVVAUE1VR+LGP/RT1ow4HYZu7vefRwjHcpiRn\nT6EGBZx8DmMYYJYcJvVJ4myfgddjFEqEZ+OjsJL039qzjy5coB20yKROlLZIAsEo28eyTOq1CVRZ\nQcliDtohxtBimFgYSUrXGxKlLoka048Ei+OPk2UJQjqMF5bY3X+NUuFR9jc32OvtEomEilrg0nMr\nfPrTn/7l77xv37WTZXJSpzY/j+cOMecDpufyTB1RiTshJnXMzMEd9ZhQ5rm/chnD1ilqE3hJSqPT\nphn02RlsHU7DaSZBKom0NVZ2V1iTa6x19lnvvc5Tj//CQ+vaizW0SZdePyawWiQlSaa5RCkIVYVU\nYbC+wsjtE/gjhGqiINhvrbK+sYMAJsfK1KrjqKbFIZDyUABH6AaqZiMTwXajyU988Bd4YuqvMDaf\nEGcJt27cpBcpdPrXiOIcly5dYXF5nI/8dIHFD+XJFw6FXIUAU9cPycDzuUNyiwzS7NuVMh7kSWly\nCPJUFAVNVdE1FU07LHUHYUjedlAthYPtEapZ5tbtVXAjji4d48wjZ5haOMHKxiY3N1bodQ6oT81Q\nyBQWFxY5dnKOOBDIVKE+luPUEcgXBFmqs+8mXNuQ3N0SHHSg04TNXY+f+tTfRVFV6k6Ftr/HlLeM\nEkSUJmZxpCSkRxD1GciQ1YNN7kerDLOAWNXJRRMYsowejz20Z63mfZJQAgki8TAth8nS4+SMKdzh\nPkGyT9kqUC/lyU/Z5J2AkIwo0qhZs7R8DRH36LdvcPL0R/g7P/6riNSloiwTpx082UZJdJR4SC9+\nZ8KKd81ZxoxZBsF1/tYzv8HUgkrSPcKpR54isrvstF4lbKtoRgHRC1ksn8eOZhh0fJZJdjZ9AAAg\nAElEQVQmHmF8aoF8ush47mlOOp/EJyNOQwyxgF7Lg5xCN0FEOV5f+/pD63YPIrKwycbwLtEoYxSb\nqI4g6x/C3ZUMktEA32+gagmFfBHdcBCZZGXtdVqdAbqmUi5a1MoVLKeEqqkM+gMUBFYxh5WroysW\nvZ7Lz33iv8WMF7j/+k3svEMw6qIrGnlS7GkLPy4jZt/L6fPHeebHjrP0lIKu6mRJTOAHkCWo32Je\negCxG/QHSClxXZ8gCA5PHU15SN9FUQ5hOrEmCVxJpAj+/W//XxxfGKc0McHd27e58cYlGnsbnD6+\nyJljJynWJrlx+yadzgFXLr+MpTssLR9HNyws3WHu6CTzEwpLMwq13GEZfeDHNDoGH//4j/FLP/PT\n+L5LvlhFhiE1p4Q1Pc2wKuj4XRLHwC+0CYMRtqOikNLZG7A3CLBEHj/JUZ6cYq76cGPQyzSOz55h\nNlckkXn0Xpco6hKm+yixRRKOGMYjHCcPWR89Z1Mo1hnTbdxkjVlHJ18p8lT9B7l++6vs7N1HWA5C\ngBL6mE4Vq2IThhVQHoZHvd3eNWe5fWufEwsZN5r/gvHOswTudS6/3OZUcYkb99sYFR0tneK+cZXJ\n6nFi2SequTSaW2S9BpYaEvoW7fyblFUD6SrsDtZwrAKj/hBF75GbsFDVh8uQwcinPyjimBn5fEyx\nlBCUVWTcRmYpMk1QNRV3fQMvcPGiAEW1QBH03Q43V27gegmWIchbFoVcDlUz0TQFXdEYz1tMFfMs\nHpnkzPFpxisW/+Mnf50js1WU4j2OTj/DsYWjeMUB66tXqNUOaDe+yHb/Mm6pifl4xLFzk/RHfapj\nddI0xg8PHUJISb/XI44C4viweSmEeIvN8lDl+LByZug6Qkp6bR+ZqoSeT7tlsdPtc/3aZY4tzXPm\niQu4gcsrF1/ipVcu4oYpj54/i6ZPkGSCz33+9+m095k7MkMcB8xPLZFXJWdOTfH4iTIXTmt8+D2z\nfPKjFwiHbUq5MnGikiFRE+jFQ1baN6lIB2GkjKI9clmdKJUMUp+aalKqOGRZxn77gJxQ6Q0auOnD\nw19aaY37O1/lWusqcQCOXSdIDyjGOaLEJ0gUerFHueAgFQVTN5DhPqkZks9NUCjNY6ZVXu5/BS1L\nuLP1MkF3k1hNMa0lKnEZGQUcn
ztB7KbveM++e5ITQuP1iwGXNod85eptppOT1IrTRLLGE8dmyUU2\ncZJSGK+z1r1Eq9MjjV3WvStkWg5ZsFjb+z3uNJ6nud1ga2OEIwqsbtxiZnqOLJgh7Vbph288tK5d\nGTEpZqmq7+N7Hvll4iRPeeJxYkPFtIpEUiWMU+J2m26nSRwEaJqOpucQaUa322Rjewuhaji2jmMq\n5KwcmqrRdftEiSRXEFi2ymAUEfuCybFx9sI2waBAEnXZGLwOmssjJz5EEh2jXD5GouZRlJiptbNk\nYsD4xDjeaEgcR+iqgmGYNBoNPN8/nF3RVGzbxrattxj54TDtUBSFJEnIlxwOItDVjOEoRWgmn/nM\n8yyfXGZ3P+TenSuousOnfuJv86GPfj/D3oAbb97gyEKJRx/7AItT86xcv4WQOsePP0acpkxMVFia\nO8nCkTmW544yOV6lVpvCzhUYDQaYImLUP2DH6+EOR5QUm0pax5DbOF6JtBlx5GiesjmHMDVUAabI\nMVOdodXyyPQBneHDuNtRSyfUJAXdRrF99pUmMi2w7naJkGhqCTXUaA5bSGMCx5xFVefwo4xCOcda\n7zKRNmToSTIz4erWn7Hp7hD7JfY6d2hFe/hZjN9LUJJ3lpx415zFDRtM1D6GruqQzdDTFNaa17h8\n51UGA5W23kLLp/SGKd6oiG84EPlMGjMUx2cI3B7YOXLJoyTTTU49fo7pqWc4/+j7GcQJhaJGbfII\njvMwW8f2tke37SNFk+dv/FNif0Qru4Yw8sRhH0dAFAa4+LibN+n0t0llgmmU0Kwcfthia/cOW9tN\nFEVSyBuUSxY528E2bFwv4aAXsdv16QcZOyOPTjdif28dTTiUrFPY8VG2bm4R7W7RvrPB+vBN6nod\npaOwublCfx9kEpK3JGmS0W53abWah+EVD0SOkhSQCCVDKDzo8wgURcPOOUxPVpiqFpEyYNsLKeqS\nxPNA5ljb6KHLNqfPPcFw1Off/t4/YW/1Fkvzs0wtLXH18hXiJODCe3+A/NQMuzvr5PNVOns7LBw7\nhyIkan6MfK7MeH0OS9MpFk3iICKTCbpWYG+zg9/XUKwyB/oqH3j2b7Fv3WLHSxEsEUW76Gad8dxR\nirk8e26bybqCFRZQ9YfpqzIjo6xUUK0q5eJ70AwNQ6ZUK2OUMou8opBzFIp6jTFTMowO6It1kkhn\nd3uXxHNoNYZsdTfpuBEyMcjncogkwg8CTNVECWz2Rtu4UfiO9+y7pylpzJGqgnrhLNLZQtR91PwO\nxVJCqISobgnLddjcv8/skUdQZEAipsm0GnvDVQwRoYk8abBOIVkmHG4RxCNkqiM6LTrBTW7sfB1D\ne1h9dvl0meJUnvmJZzH6dU7NfoSdRhM7PSSAcAVomUQmksHODlIkZEGIamjUxmYx9Qqd3j73NlZo\ntH1AJWfq1CsO1byBY0LeVMhpgnrBZK5iUyvanHcWmKguE7SvQ+YyKSbYujiicz1k9EeTtL+mMrpa\nwOvm2N/YQdU08pUaCwtljs5Pc+rUMmP1Oo7jYJomjuOQyzmUSxXOn3ucj3/8h/mrP/lJTh+fZqKk\n41g6WZZiaypuZrCJilItoZWK/Mmrt6Ho8MJXvsTpk4/wAz/+c+w0m2zvrRB5Q+YXT3P7xiU6gwbn\nH3mMKAlQhIJpOJSsCmkQoYRtUmEeEm0okCURhVoOqWcMRx0u37+Plgo6+7vkyxF/+sKvk9PzTCwG\nWHqKnkvwYpe5hSf5vqd/joruoRZTeqMheuXhGSQ5LNDqDej6N1nf+SyaTCg4OUhG9JIuqd4iLkZ4\nfsz9zhZmajDYVzFsUIwytawOsUbOTAgin74Y4bZM9qMmo7bH9vYBumqgiyIRfd7J3jVs2H/3Bx/A\nl31GgxGOlkfJeWzvbpO5JY6I41BIqYTHuTN8jqhQZhg2UPolVEMwO1Gjak3T8Ty8sEMgdxivn8dL\nNkkaDk5hATHRQulbJPKA39q7+tbaP1V7LyO3RzCIOPvID7G/cYsoSDk29OgPYiZNGzdI8VIfRxg4\nc8tMH32S8emj5OwShm3Q2ttAypS5+XPMTkwzMVbiW531b4Eev8WGL5FEcUi31+OFF7/M7//OP8EN\ndQxbIUkdOo0WrbaLZWmAThzHaLpKlhzCaBQlY3xsktn5BQq5PLliDlWVZMEId9jjxIknCFOdg4Nd\nvvn1z6MqKnF2iEj2fB9F11BkilWwqVfHyTKXJMwQMuEf/uIneP3N6xi6z9zCozQ7fbZW3qBaX+LI\n8lEuvvw8z154Pyvbq6R+iGHpYNrs3N8gNQU5p8Ls7DwyjIl0QbUyTjxqYtoFvti9hFE2SbJNdDGD\n1NfZHzbwXJsxZ5w4g17YpmBmjDvnGAx2iXUdL9hBNQT/99u0QD8k5yiYefrBgJwTYWRlTOeQfUdR\nLHp+i7o9hudG9NMUJVUp6xNg7qNmOUJXodnvUhorkzegrzTAs0kCh5rQMI0i7YMtSgtnqegl/o//\n5t/8hdiwd1GfZYfhEKqDY6w2twkCk4J9FA2HgXFAagd07HVUS6WmqeS8OWYni0zMtQiaJvvKCnop\nIicEpco4w9EBBClG3kUf3yXqtFHsffY63/GkUELyap3p+WNELZdOcpepwgSN0gi3NcCPJfm8iSZ0\nvDjBu3eLBJe9/Xvcvv0C66vXsAoFVDNHf9hgY3+Di1eu0+4NHpCEZ29BWADSKMP3JOO1On/1h36C\n0oxFkiRsrnfYW9/B91NmZqZBGtSqNZaPz7OwOMejj52hXq9w6vRxxsZruKMGJB7BoEu7ecDa/TWC\nMGKvvUersQJKQhAfQvrTKMb1fYIoQQgV3SrieiloAlVoaLqBapT4jc98geOLY0zOHOf1l79K0bE4\nee4x9hu30byIpx55jOtXvs50vU5/2GFseo6oNwIb8pZNmibYhkWS6OhJhjfsoGgGa+5VjFIb12+x\nP3TZ2HqJJBiSpQVMRWF/eEDf3mSyWkNm46jSY7z2GIgEVeQxlfJDWzbuTJClPkGYQVQmTB3ioEC7\nEyPCMopUGR97hkrtMd73+I9SyKvgNInckDBpUp2vs3xkGpF5KHqI35MYoojBEN9M6ZWuE01EqPaA\ndrzBO9m75izRQZnyaBlbNZAdgS4gzbqM4iF6JgnDhGHWxDDHWet2sTMdXUq8XpHceJucmifZGxCE\nKVkakiURjX2fRG2RV/O4oz4HLQ7HZ99mSpZD1fNk/YBucpO//ol/Se1knfLUNE7BxI1GZFKlbFvo\njkqkKXR29w6hJ0qeTnuNve17xOGA9sEOMpEkImFjd587azu0+z5BGBAnMWmagAbFgomiKKDaqO5R\ndvd8JsbGOXbsCONTk9TGbZ548gyTsxNYtoOQEAUJY/U6uuoQBB7N/R22G5usr91n/fYKrh/S8zJG\nYcbli69y8ZWvYqgKvdGQgReQaAroCoqmk8vbWIpEM0FqJqphEsmYbmDwtaurRN0+j33wg9
y6/gqa\nCovLx7j4zT+jPDFBbJXw/YR8LoeFpNnewTELKDI9RAmkLkk2QNEcknSIDAJ6dhFbP0+m9Bh2++Qr\nc/jo1EdHOGedJpUCK5giDhyMNGKvvc7d7VcY+CFCzZGJh6thPXdEkhpYjoIY2liZgdSLTFbOINQE\n1IydwRfotNZo965BWCYbKbSTIW5qsj14nbS0RaVq449s4q7DzvY+ouCQmCqPzH4/jy18FNXXycz/\nDEvHO94228nXkEGEn7p0DjKScJHKxDyeMBCBySBQsZQJlufHWTy7RJDLKBbzDHE46AzIxh08J0KE\n02hOgmYWGI0s+n2PqfFzOLqPkzyMM7JUC5KQ4pjNcABf/eb/xjcv/jukG5DPG7TaHq1WA8NQKD3Q\nPTm4cxHXO0AzVUynzqh/wN7ODQb9HVIy4jghCAZ0Bw22d7ZZ2Txga6/DVnPEdtNjrx+wN/BxvZiZ\nxRPk8wZSpMRZShSMcPse7eYGWdSnWimgqpJRv0m7u8XQawKSUrmM1x/Q8/q4WUSQ+WxvrPLGyy+R\nK0+QhpJEs+l2Pbwow1BtklRlMPAYjfqYeYPF5RP4YYCqWRxdmuP4sWVk+QT5qSm2793j1Jmz3L70\nGscXH0MW8ly9dpHHzp7l/v3rFOsT9F2fWCYszE2BDkk4JAkCdEulVDZRM4tY9bi9d5fAE2jWGZYW\nH0fJ5lk76OJaQy4OGtTcWXJ9g6w9YHDgYxQex4wCilqJwI0IvqMvOFkfJ4g9HCHQp8GeKTNZtRir\nT3EQ7hC2FLptqJVMVAw+dOETzC++l/nSLI5tY3aPkA4sVoMbHF0qcvr0JI+dWkaVQ0yrw+3bL1M2\nJjl9/CNE8QrvZO9eNazpIMMlbvgb7K6ptNubZP1d1NEOC7llEplS0BxSPcKPbfZ27+C5McGwTBZE\nFCKLZKBSMKqkRkyWuUzVLAr5HF5/j8BtESUC8R1inpocYldjOv46v/Bfxpw4s81MTjDwRiQyJklS\nuiOP3YM+ikiplsYwkhivfUCaJZTsCk55CjQD7/9h7k1jZEvP+77f2beqU2tXdXf1dm/f/d7ZF24a\nciiKEkWJWqw4VgAnthEkiODEgBwEcQIkgRPBCUIHsZEgtgRHgmTDlkiJWhgqpEdch5x9vXfuvvTe\n1bUvZ9/z4Y7nTjOkJFsJxs+nrlMH9bx93vc5z/s+y/+fROzceRvfGeE4DppmIssqRZExm4zRVI16\nxUQWRIRMRlIlLpy7gKmZ+GGE4/ikaYLrRYwnIeP+EXs7twncCVPHIQ0ihEIgiyOiNKUoBKIoxHM9\nZjOH2cRh//CQ/f0tBqMxN+7cJUwzTFMjjhN0TaLdMhjP5mRRRpGkPPX0U1iajIFynyg2jPjezRlR\nDkfdHk88+2m+840/4EMPPcqNN1+jiCKmgzFrS0tcufomgiwhaiprKyeRZQFR1hnP5gwHQ4J5xG/f\n+xZIAePxPQazl9juv4Wfb7NpPErfO0SVIuJawogeeSZTWvQRnVeJSgKitoOT3iMvHX+795w7yLJJ\nUOjEbp/5rE9/NuVg9D0WqjJqWbgffBB1pDznq1f+RwaDN8kVF0118dxD4qTF2aVN7u7v4Rczlmqb\nSPqIVfsjlKwGotblha1fR1aPv1zfLx+YsZw8r3CxdR5Z9TjdWqdtPQxah0iQmeQDMhTCbEYY+5RS\nBVOvsVJeRRF0KlIdVW3js0dWxHjTiMnAYBpMGMUpltqiP5mgFxLN9nGXfm9vh+WOCLLAP/3nI964\nYZCY59kw2yALtNtlesOM/nzEwWCEmCeU7TLTq9eZxzOCxKdut5GpIcgiTjBFUkzm7gjPGXPr5muM\nR2NSsWAy6nP77j3COKEg5e7BLoPphCANkJAJgpg0jqjVNFRNJEkh8V3SNKZUtpBkiWA+JopnRIFH\nEAdQiAwHc0REkCQkReeoP2UeZJw+scTKWhvTNAjjOQu2cr9+qmJycDDDmcwQZYHN8xfJBJ2555Ij\nQZbSKzrozRMMBiPqCx1cb8rp85dwZnPUqkHvaIDnBbQbDVI/QJJUJE2jEFLCIMJQ60ylA8RqmbOb\nj1Bfl4nSgBP1Cwh5xDtHryPHbYaHcybTEVEMO9kQ6dwOwvkdzKUtKM0om8uUxeOt4KP9gCSdEg7n\n9HoZO4d7jNw9vHTKYCBilJYxNYOD/DaallIvbVIIAUNvBhQUpWVEISY5aqNrDRAnXJ39MZqxzOFk\nhyCDq7tvousBjpfww+QD62cRpRp3/JcRqxZVXUDKCubuXWoLNmMnRJEUNKWMF7uQRgSyi+ZLGIXJ\nLBfebY6yaRglImmIoqjkoYIiO0wEB93IaNun2cuuHtN74rEWr127ysnGh5llQyLP55mnQra+oSJm\nFkYtx9BD7DLMnAzPP2S93aZSNggOtpHPrpBlBe3OBtNxlzSM8Ocj0tAhzWV002I8ustgqlAp1Uiz\nkK3bL7PYOUsaxfzeF38dVZTJsphavUQSZziOz3zmY5YM9GqTMJghCQqiqJGGKRI6SRLjOSEIAkka\n44chiqIwnc6RZAnb1qiWbQaDEXalxHgCe4czZHmKJMnIWsLB7g6t1jKFEdNeW8Jz6kwHPdSKQRFH\n3OkpWNmER8+e4Zvf+zJPPfoh3nzjVTbOnGc47ZHj059MyGUDQx5DmlKQsbK0ztjd4orpoxfw8htv\ns9yyMJ02d7zbqLJJvVVBTIaonoollHAjD0lOOXz+ETJjj6WVElXzQyys5XR3jlcdi4WMrtY5/+Eh\nYqBxszvEKgUkvool6uiBQbW2SEU5jT0ycJUpAzGgaS4RxTZnWw360wGWWhA4Ad00ZkNd4IAAs5hS\nz20G+RFqIbNUOgscD12/N47/v4zhzxLP2SeKXbRYQBFiMrWPalSIMos0H+BmfYokgkxGNGKsosUk\nGNPNjxCShETYp1au08tdOo1NdM1D0HTK6hpxGpOHGX6xjx58X0bWNdAEm5l7QJ5GWJrI5VttrI6I\nBAiZxuaawaAXUK1KKFKJOwddZr5HdLTFYLSFG84QEZEkDUVTmTo9BFGgP7nH0uoZskRAEmJC32dl\n6SKybNDv3eOLX/o1ZEWiVqujKDq+fz8ca1kW1WoVVRZwZiP8mU8Y+SQpyJpMlEUg5MiqREFOrV5l\nOg44OOxjmBqaJlKp3TeoRt0iie/nRWr1CrKqMRjPmE5zxmOPtEiIg5TAm2JXNFZOrlOurRMGKUmW\nkuuneasvcHblJJdv3GRhuYMsqeztb1EpWzTsEkLqEycRlVIZdzpn7nS5Zh2Sa/cQJZdzJzbwY4iF\nApkKA6+PHwc4aYGiqYzmLpNhgIWDO/Fpc4Y8LHNh9RmErIwTH49g5rFK3VpnNjAZhBJPXfwZKuZD\nzFwBXavgCQ79Yp+7zl121V3mfkrk50h6A5I5/fwOeX7A0A/pjmYInsHYd0gGY6YTj+5wjJ51UGpr\nBNLxncj75QMzFqNSotqMcaIpfc/hcHLEgqnh9
YfEsoYTO4iJxWbnItkoJRRdauoGZlFFs5cQszJe\n4NPULfqTO5h6m4ZZoGguhDqa2SAJGuj68XB5GCest8+jCw3kQqe67LFYb+LMVfIEBEUk16BWqzEa\nO1SrAlbJZrc3wJ8EjC5/l8HRPSbTXUTZICHHcUf0e3uMersMu4dU20uknkOSeExmh1Rba2hGi4P9\nLaQix/ccppMpWZzjuS6zqYvrugRuSp5mGLqOHId4rsvUmTOb+vh+iOPGBAEUucjiYodarUrZtjhx\naonQT3FDn1S8X8pvlRWyOKVkiFw4t8L58x0cxyHLs3dRYyTC0EORFRQtZfPch1hc3sRqmti2TbL0\nDHJlg0LI2drZJVdkkkxEVMvImoChW3i+j58U/Mzf+Hss1jqU0wW2RzET38WwYuRSzsrSKmJeIDka\nC+UznD3/BAurEs2WwcmNn6PaimmfXkDKTvHWva9RYNHoHKecOPFIm9S4zoX1/4x4ntEdjAnzOYu1\nVao0sK0YgQmFOmE6m0IOpm4zGt/EjaaMJ12O5iIHvT1iL8FQTabTiKrSoSXYVNpl3HiIGMT4/zbW\nhi2UL5GHLWS9QRZpCFGJu6MeeaGhJAUNsQPWkJ57mV4eUc3rZFlOFosI8ohMmCOmMwI3JRECZvM5\naXIfh3hRXkWaJiS6S1k4cUyvmGQUrkqqO2ycrOKPbW7ee4lYKdA0HSXNUMkpl3M6zQ12ticsVTU2\nVxaYhS7T0Yj5ndfZ3blJqW4iiSJhENBYXib2HC7f+hrLyydpLV8AQSIJE1rVBufPPkLQHWJXS6iG\nwkLdJs8jXCckz0E3debOnDRKcbyETNbQFBkRFUmGwI/RNBGEhCiJEYQMVRGoVysEXoKQp6RBTjjL\nCZKQ6XiOZhTs7M4hSYkDn4984uMUyX1gDFHQEBEJA5+iSEAdY5R0TENHUQ0KMWdh42EWHv2rnP3k\nL7F64acpNc/eh32axnhZiVNP/DVSa5Hrl1/iEyd+miFbrOklRHyyKKaiVdCUlPXVDdbX2iTJDUbz\n21QX96ktWQiNb6O35ownCWE2I45cJsMdzPR4rkPiCDeUuL33T6mpjzIZenTvjRDkDok0w5AaJJKI\n7D3ExcWn2O0NqZfOY8WrVK02DU6iujIMI0xRZNZ3aQgtvDykMGUOu9exJA0hz9DkH56k/8DOLN3D\ntyg1YqK+TOC6REbKmrRGf5qgLkQEwzFpIGGf36Ej6/R2ZgiaRLNWJQoESgokRchoNEKyLKbuPs7M\np1VeI4i28VOfFVnncHi8TfRoNOQXf/K/41t3fo3ueA+t5fHJD13g+W9cxVbPgiTy+X/41vHBfvv7\nE1XXgOeA/50fLP/TD7x6n3fjxg/87s8jDz90gpXOEpPhDM9zqTWqZEVMxZJIfBlR01Hepe2zNAPP\ni9AMkf444tIjTzLqj7HtjftIMYKAKGmoqoahGghZSqO1xnjSRc5jClFAQAFBobywTnPjPFKao8ky\noiKiKib9gysML4eIhog/8JG8jKous7b8UV6+8RwtQUHApLPYZJy+ytmlMtf3rrAQ2gTmkKPhhP5R\nTrNeUF7bw8+HuL0z9L/PWJyZRq0tYeVrVJtn2L/2R9jNJln0DsNCJtjXyaQIR7rDpdYEDYX93gv0\nUhnb1xB9iVrZQMgy8lRBNiTKzRLzw5xETVG0s3hpQOFpFOG/hcaSlg7uE/nEYCsncFOH2W6KUolQ\nVY2ocYCctdFGzyAg0GiJFGlBb/YW64tPcPfmANXOscrnybIRpxtn2B/1GUcuy+UFCm2LnrtPq7kG\n78M/eOxHfF5/7Vf56I+XeLLzY3z5nRf58hfe4uzmSRREEu+H40Z90GKbFq4zp1K1mHoxaRLiuxmi\npGOUVNx5RGWpxrA/ZDr1KAqFzlKJztpjrG20CLwQ13UptUoA5LmAbhoU5EiyxmyyxeLCWSbTA4RU\nJM1jiD3EPEEXZey6jaGpqKqKKApkE4uNtVMcbHchKvj7f/V7vHr7T3jpytdoaDaL8inONx/hfOcc\n287rvB7/BgumxuFYZtEqgGcoki0m/hbFbY/aZoVCSZDD41uhxdWTRMGAwJ9wsPdttDLMZz6tTKaw\nVZYbZ/CKAf2dLpe3rvDkEx/j1t0dzi4abHdHyH5OXhGxSjaCYdLSq9zbH9NsLIA3p+tMaNQt3Mxn\nubbxQ5//B2YsNWmBQlQ4UV5DbhS43TJPPv5jvLz7VRQ5YlF5iO3dbawFlXqzxNtvvk2nXcNK2oSz\ngJWFk5SMMjvuAaVCYhIb1JZ1wn4ZpaozuanT2mjhZcffUr/4of+W57QvcuNqxulWzCgu+Eu/+BTb\n13pIUk4sqO/d+1/8rR/FkAtyT0M58tmWRYpin7fe8QkKgZ/9659jqb3K6tJZ2otriFmMbdcplytk\nWcZ4PKFebyGJ0K7XGM0TXv7ffgU/GHNwsM8f7t5hs1xDQCDMUwpVI4pzsixjHvuIyFQqJb78x98D\noFI2seqLjIZTbCHFtCp4Xsh04kKWkiYCk9EUP0qp1Croeomf+OzPkKcpCDlJLFOICZpVwS7dbwnI\n84z7bTAZAiKzcMjC6gXC8T5x7JPkGdHsCFUQSWSJkmEgawpFKjId9njiiafo9bv0+wfsb91AGeb8\nWOezqGoZQbZora0jpClPrXyOJ4p/h277Mv/wT/5z5rFPIrzM4voCddEkkUsoSZnWsogvJ+/H2CMW\nZ8RSBJkNZZFFbZkqI5IoRIwrDMY7EBVU6lXcO2WST2wzy1Jsy2CpViItmdTVCkIpZHH1kwwP7nLu\nRMrAcTEqNmc1hVhUaDQMRn8K1vEHdmZ5tPMMFe0Cd7pv0VGaiEsCB7ff4FTtIr357MYAACAASURB\nVNlcxosDSorF/s3XGCWHtC0bMdFwBeh273E42WfujyhnKp4+YnjYI3FVomRIzxlw4tRJSkqNYHic\nGOeLr/wzTq7D+snH+fL1uzx9YolLjTKfeugvkyg57wf8t03tPleknjKpSCyUfGzVomULVASRr/7q\nHyEUCv3hDnleMBj1CFNIYoGFRouNlQ6u41Kp1EizDEVLmc5GZLnC8kKLjy0sMQtCDqYT/DRlMHMZ\nzqZMwgBZUrFtizx+UGdWtRuoQKViYpgqURzgeTMiP0bIQbNUkjRmdaVOtVrlJz79k5BBkufIpsWp\n0+e5eOFhahUTRRHI84wCkCUVCplciEnDOYE3p7Z2EcNewrZbCKJCEs2ZzLoMBockUUy/e4dpmNAd\ndBn09kicEXeuv07ZrLN86nEkYwm1VGM03Ae5YD6aMZ3tU81W+Luf+gL/0ZN/n5XSBgtEzIMQWUyI\nw5THT32Canr8nEmiMt6XGCczKqrJYbpPlMmEWszRbIKeGAjoeKFDeaFG96qJKCVMxh5YPTJ7xD1e\n5igf8Ob4j3j6iU9Trq6iN+c0qzUqRoXqgoAv9/A43h7wfvlTPYsgCDrwbUADVOAPi6L4r/6iFHkA\n37n8
JoZZxaxXudm7RpD4FB2VySwhzwJyp0wkGTRWbRjnxPYeUmBgCSaVtYscDW8z8uaUmnWmgz3O\nLT/CYdzDqkuYqsg4uMnplc9RNipw98EZxDrc5MCOKMYull0hCxu47kc59N+hCF00/UEGV1UFoiCj\nP/ARtQpiGiGoEvW6ShSB70WEfoButigKjdOnPsLMPWQ0ifDdOefObmKXy1Dk93Gp4ozSyhLu4R55\nDpfWV3mj20PIFZJMokhTClFAU1UMAwxJQzEfRPMUuaDQZGplBYgYTSIso4xIRF4UNOt1JCEjiBPq\n1QaSaoIAkdtjMtxhKKksn1jBLll4rofn+0RRSKvVQTcMJNFAEnJid4/YtFg5eZEiiYkjnyQLkCQF\npUiZT2f8D3/nl/jEJ3+c1ZVlxrtbZFJKs7wMsch4f4pY+LhTn7nT55tf+W0evvQwoqhSabZQ9Qon\nF87ydO0Zvjr5A2wtI8occtHjudd/A71wjq0VXRF4aOMSbl4wdO/S0J4A5RbekU61ZpGHEdOBT9uW\nsOoS3cMxmRRBtgbJIXlPY3GpgydMMKUO337+1xiJE9YWLxKnOX51gGxqRF6KqSo/1B7+VM9SFEUI\nfLIoikeBh4FPCoLwI9xn+HquKIozwNff/cz3UeR9Bvg/BEH4gToa4jm2RlcJE49O6RTn1p9h+60d\ndKNE7ouIUkrTMoiFMd34No3sYaoLjzMNdeaTKX6WUbYLcmGOopjsc4QolHGCgFHsoKbrBEGX/dnl\nY3rNUyrTocmTm8+S755kcCjwz775T+j1XQI8BPHBw6ooJpaooStVAi9n6OSkQUIeSOhFTjVPmU77\n7Gxf4803vkKWhUiCTanSxEsDrty4xtQJmHkJWQa6obP2iZ8gEyQSUSE3bZ7aWCBWIfBiUinHNk1a\nzRrtpQ1OXrqALD8wFtNWSNKYOMlJCpE8S4nTFN00WO5UCeOQIPbpdYfcuH0NVc2p16qsnXyEi5c+\nyubZx5kMPQY9hyJXsIw6JzbOE0URaZKQJAmSqCDkBbPhNYa9HaxyhYXlDsvLJ2k02tjNZcyyzX/w\nN36Za9eu8rXf+y1EFUqqSpoUBEmE6xzgeH3Gk7u4sy5SEvGtr32JYf8u9y6/hixGTLq7PLX2OU6N\nLqGaCzjZlLKscXLTh1A6NmdKJjGeRwznW8hZmYVghh+FtFptVGWOlEZUVjM+8tRfR/Z1GuYay+Yj\n5KQc7ixTqZ9ClKroxiKasAhljUqnxiQ4JEuPyP2cqSMgZDqJ8P+qzP/zGcu7BvOvynZV7oNSTbhP\nkfeb717/TeDn3v37PYq8oii2uY+b+vQP+l3HvIYkxggFvNj9E/b2X6K+cpobd28RZRLdvT6Hzj0I\ndS6qv8CgP+DW9hVOd9YYhBPONR8jzhzyWCbTp8iNeyiNCRQycljFV7bpjd/mZO24S+92PebjZX7/\ntX/Oq3u3iPwIW4T0tosqVLmx94Bu4OUrO9w6dLlz74DefMSLdzJu9kImQo5fgGiWUCWDi+c/jmGW\n+Zd/8jssLtbYaNe5cOoinaU18kIEEVQFwkzANG0W6ws0220WbBsLjaeX10iSnHqtztKJDrqhoCtQ\niCKdExfeG09cSJi6gTOds7Z6gac/9Gkss8LSQoM4yJiMxoRRztJSg+XFJt/51nMEnkMSTBEKUC2F\nsxcucur0BXSziqKXKCQVXTdQJB1JFEnSiFzIoJCZ9W5zsPsOEmBbFnbZxjA0tm68ThCP6PX2MXQJ\nQxIQM4moMBmOByDKJFmG0+ty/e1XSQudhc4pXrt2DWe6z5XXvk2a5cwdj5//2N9mdM8lPNKYuT5b\nWzIn2z96bM5iF/qTq6xtHFDaeB5PKFgunybzepwoP4xeVoiSmDev/C6xOMZPBBANonCGLBsMopso\ngU2eVMmKPqEoIbsGRabj5H3yskEc7XMw2iKRevww+fPws4jAG8Am8I+KorgqCMKfRpH3fuzLH0iR\nB9DfjlDLJwgmUz7z9H/CwewmiVBQW6wzTxyktEwRxiQHGrNLO0ykIwRJ5dq911CNgL57gKAtEScH\nGO5F4giO5l0unG/Q7U8oxW2EROB69vIxvVKm4xfbJPMRmSLx4v6bxMEm7b0JYhgy2gveu3dz4yxR\nsM9Lr9xm6GWopsltX+eJUxp3J31WNYmdb32Npz72MzQaixRCzne//bucPv0MS0tt8n91BEoKKmUD\nVYHnf/3zrNVaFM4Y10/QpAKdGZ956jw9IcJPY8TUYzxy2d3d56GHzz14mN0+VVul1TnHJz/zE6hy\nnWc//bMomk734BBRyHj9zW/ypd/+EvWSzkd/9Gk2Tz9CGrvEScqkf4/b996itrhJtWKyvHyJbm+P\n5aXzOP6A++jNBVmRIyMgySLz8Tb37uScPPUwsqgQxB67197CiTJmozlSqUTgzKmvnCaPR+QUdPcn\n5LgMvAC1tgBIDAa7ZOMJxfIJQhesRp3u7Tewm4v8yud+g88/93c4GN1jbVHl/37rD6H0YM7SyGS1\nU6VUnZKO2+x6tzCiBjO5QPaOELAwc58s0elt9al1ROY9gXNLD/Gme5kiXIaqgevcIRY0pFzlKD1g\no1HjdjfHEAbU1BZGc5GKYfHDyl3+PJQTOfCoIAgV4GuCIHzy+77/16bIA7jz4gRJm7LeOM1z3/4D\nxHZIyTYhl4j9ELNsIeUm0mbGNL6GKgs0lCojKcQQFfKiQFVsmKQcDfdpbOisN5aJgpT1codm9QyT\n2S3G4WPAA76Pd27s0W4LLFU69HvPo+fLnDNEJF3BUivM5w+iIWmwjZRK1BdbME/ZubfPwkINZwwn\nTtY4uj1icPWQlRe+xMef/cvImcDKqY8AGaPREMuyEeWCqlnmf/2Vv8WaImEYZe51D2iWGxSqSKFI\nqGaJdNqlU11CVMpItU3+0i/9MjN3ynQ+h//5twAw9TrT6YRnP/UQKiUMrYSgSMRRysrGOnmictEb\n8NTnP4ymG4jq/epjQ68Qp3NG0wmS3GaxvYYgq7hxysrGY0zmu1Try6RRgCBk5GlCniWIoowkFviz\nLsOjBcrVEt29A3qHQ771yncwpYzVM+cpGWVqRgm/CGgvrzF3XEbjlKeefIRvffMrBMEYTVIw20uo\nlRoz3+er/9dXWKtIUNgoVp+/9ti/zz94+fOslT9O69wuL+x/+b15qJ9McIU9Ej3m8FULNVIJbZHQ\nSXhHep0Lqx9DLRRGkynECUeHFeqLXV7f3WW53KFdNThyJxj5JjO3iyz12aid42Bwh7XlFgdXdrm1\nPSVOBaS/iGd5b8UXxUwQhK8AT/AXpMgD+PBPXuQg26dtBfTmMfE8phsFrLVPYJhLePEBprzE9d5b\nrDc2KfI5QqnNgjIhDBxSISS6fZ1qq83q+iK+fJOp71AenGQqjUiFGSPHISM4pteu5EwGKe/c/A7l\nkoQkeYjDEFuvkUoCQuXBmWXYH+EIBZuLMisrj1Gv1yEcUKuCm
yg0NltsdR0iz+fw8HVq1WUqUpvc\n0lhYXOewe5ff/Hv/NR1JZLW5iqkZJPmEoijQJYnCsFBnJn4UoCoVyGNKZZPA7fG95/6Qcx/9OPX6\nAzjR//K//wdIcYAgyux1D8kQKOIUVRWRFRVRF7n46KfY3n4LRS5DHpELAggJ1XqVpz/x84i6DXlG\nMJ8QRj6pN2d16SJOMKTRXCT0pySpi5AnZFkMSMiShO8NiYqU1771h7xz4zKjwZRHOzK97buoJx5i\nOD+gVm0REFJfWkYr13j71W9QlBd5/fJNTrSWKDcMQGL3yjVOPfJhjvavIGo6ar9EY7nOZx/6LC8e\nfJ0d9/ibfXvvLcZ7Kg9f/PdY/tCLpAdrmLWMG6/MaVQ+yeHV66wt38d/KxYsXDfC62WcWbxElg84\nku/R23c5tXEWw1pAURKc4QQjsUgcidIanHxIYRDPEFOVy8e5lN6TPysa1gTSoiimgiAYwKeBv8tf\nkCIPIEkyFsw2CBIQUyopKErEcDbGO3IgCQkWjtAEiSQFEpXD67eol+rYKxt0+7uIdYmptI+a6giK\nSZQWtJQUtbLKwfwOlZJC4h1H65hNR1TkGiuNNbxojJ2VmQ59YiklDQM+9sgqfO9+biYIMvKsQBZN\nLizFdG8fceH8/bKb+cGMyQSUPOT//MdfpKH/LhdXqxRqjq0ILNpNbKvO6XIZu7JARVcRVRAzjVIp\nZx4EWEmKoshkXopVNpBVlSKJ0ESdw7de47FnfxJVecCeq4oiKApiAUvLKzjTCYKYUCQCiNJ9+nHN\nZPPMo/SODhEECauQgQjp3UrnIp6hm1Vku0JTb6OKMpZt89ofPcdTTz+LXV+iSENysaCIIqLEJ0li\nNMtGAq69fQMvnDKZjRhXF1leKmEZJpqkYFkGmqxRqzQoGRHP+3Nev/wC5578OFfefp6fvfQsV25e\nQ9EEzEaD+cgkLQqmox6iEnPLfYWVxjkWFzp85e0H+THfsdlc7OCOurizDMMc40iXOfXYjyMFEXfE\ndQLFopKOSAqNIBjizAsm1QHlqkIyUlm9YOHLNwh2bcI8ZeP8BUJvjlSZke5JjByV2C0wUPlh8md5\nliXgN989t4jcJ2D9+rt0ef/GFHkAM8ZokYifKhRxiFRpIg5D8iBDEyETVfzJhHazSZAXhLOYhVYH\nUY/pju/SbCyRZhFCPiXVVEInwHQNuqVDFrKIR+pPchReJa6ovN+5xLFGJBTMYpHu4QHuJGFcCJyO\nGoiSjuM/aEPuOSXWLR3HzxjcOORsdZGDywPqG1VEWSWIJtSrZVp2gSmLlC2FcsnALCROLW8g6xaS\nKmKKIlkcE4oKpipR+CmCanJn521yXcRUbZLYwzAtKmaVYB6iCDGlko3nP0CnKRBA1BEIMQWBUBTB\nqCIig5gjFRKCKCArsL6+yf7BLrKsUQgyiqDd92i6hYiAIBRIYoag68yDOUKeMukf0upsoGgqkqSQ\nSxpEMrWaCYrOi//yd5H1MVk/QZFypl6IXbUQ5BxNNXHdMdXKOeLUQRRl9o6OUMvrLJ1UubDwWa69\n8zaHg4yVjomtq0RmA8nP8aN9OuvLtHubvJG/Qvp9iDxnO48wiQ4ocYhea9KUTrO175K0jvAjjc3z\na0wHA6zF82y0T3Kr9xyTHYFGtU2zvsn17I+ICpm6dAJxZRvDTTga3sASA4rdVRRdRQlCNjd+irfv\nvfBvZixFUVwBHv8B18f8BSjyALy5iydJZLFMxVLxZ1NG4wAhDylLFZYWbHKpjFFYaErG6ccu0t3d\nISoc1ior2CsrDEZ7OKHHYL9HbbEGeoYaSIiOzPWDVyktSfje8XLvslliPB1TCCKPP/oZ0qBLf2tK\nkkkoQo71PsyqlXZCf5KhxtCVMiw7Y+/GmCgI6JZMamWNNI1xI4EzG1U2lhaxSiXieUDsu+iKgS5Z\nhLmPLRvohYTcaJGNR8hqQqRriIWMrEBRiFgimLKMoEr4qcDwaEC1WXtvPIokkiKiyvfL8NMshyhB\nknKKIkRWq0h5QR7nhMmQ1ZV1hoMRiqqTpTm6VqArCiCQZgmybKDKGqIgc/7CUxwdbqNbJqVqA0MG\nzSwjIpEWESQ+r77yCkjLGNU+68IJBMchjGNUSUJQBAy9hhv2qIh14iLmiU/+u5zf6PDHX/wn3Brn\njMYei60SFatFJubUGg0KZDRNJcoLPrr+U1zb2mN8cP3YnE3DPnWtSU5M4hlcHbzFytomclIhkX0c\nd48it1CiHu+8cYe5fYR+0mc6OsRKz9EQV5BFn6E3JpMTFMPELFxkpY0o9ZiGYyq6xbX9Fzi1dJ6v\nc/sHrtkPLIN/sbOJXlepVxVkTUEWWpwoneLSIy00WwBLZBxsMRETRCXkYLBDqkaUZBs3jZiOA+4e\nvYASKJRLDdIsQPYEKvYSUcVke9pHdOv81NpfOaY3PKxxdCvA0hVKZoW9w12qikqWRUydMUL+wA0H\nbkrJjLEXM6qtFFUWOPPIEodRTPVd1JSqXSbLYuq2TrlkYmkazXYJpaISiwlTfwAxuEHATAjYvvEG\nXuCTJ3BiZY26baBpKgIFYeATuXOKLEDMEqajQ3hf3L+gQLqPr4+smiiyRsmwEHMJUVSBHIEUUSwQ\nJYi8EUIypChAkUHXSmiagWaYqLqJJEEaJ6iSzJkLj5EkAXHgkyYhQTRn0LvH2D2gKGQOrn2Hxx86\nT3//Bu3FJj/7cz+PboqoCORZgTsakEYhUq4RBA7zoUs2dXHnAd3dOdkspCQJdDrnWTq5wc7r3yWJ\nM3JAFlO84QC5ViHLDsm12rE5M0slEqZIRoOaLbHQEZhO9knFIbm2hxsPSaU7DASf8smTnF9+BmO2\niGKLCPHbHLhDxkWfU/UnCeMpjrpHriQYRpOICbXRAjvdkLIO94O7P1g+MGNxPZeyFqIay+SugtoY\no21uM8j7tNYXySSB08vnsSpDskJCtQQUy0crSSyvPMHtWy/Qys6glUwEI8TwK+glDXeQI4wiFqwF\nMtFnheP93PJil1YzJVEDBs4+mlXhifoqo/GcNBPY6w7fu9fzHHRJoRBylLRAFmSqpoLVLnM0GhJE\nMf48xFZVZDLiPMSfehRJgSpWmPpTHM9j6k3YP9rm9uU3iUUJRZKQhPt4xQe9Lr3+LnEeUYgwno+Y\nzUdEiYsoSSTxgz2koShE/pQij5HFHEFKmTh7TEe3GR3dZDa9S5SFUIgkcUKcBwTzQwxdQxRzJFFF\n0wwkSUZSJBAUCrL7aPyCgqooZEDqB2iagVAIWKKNLBSk8YzV9Qq5qqMIMrPRPv/h3/xlnChgNNhD\n1Qz8rCCKHNRCodZqMx532b5zF1VrMA88nvnML9BYaHPn9W+iSDKSIWHUBaRcINUUZr0udtIhmx4/\nZyaFxzwpcMI7XLt3mSIoY+g6Uz/ED11SIcEym1iqjDINOOptoRsy3qjMIBoj4ZAnOW8evEhDPEc2\nb1CkBp5/QOJr/Mjp
v4KVaBS5Taodb2k+tnb+v1n6//pi12sMJofUFA+z7RMk0JtIiLrHvBiTCgnj\nSKZdrfO5p/5TvnL5CxxMB/iegzd4iXKtTuAmCM6EaC5TaGOW7XWW1jMIKuy8MuISNi89/yIsPtA7\niQ+JLZti6DDfucHpSwrRKGZ1uUWU5OjCg2jY8soCfhShipCLIogZSZKxXlc5mInoeUzbLhHHHrmc\n4DhztFRElGrEmo9UqKRxQK4ImHoJ1TSRhYJMgiTP0FSF02cf4o3Lr+IHMxwzJPcjkrxAVUU25ZAg\nnL83Htd37iO/eCN0VUMoBCqVFQLZwXcnBK6PJMzRmgaKYpITM3MHdKQMSbCgSJAwKIBMkBEREWWZ\nuMiQYo9qY5EiFyiAtMixrDIxOa++9FsM7h5ytDehYTWZz3tUDJ3f//0v0JQDPFNl984Wpeoi1skW\nbuFRtRpUyg1EsUDSCj7y7I8y3HuTUX+HhXIHw65BFqDnZYyFGqkoIYkqp0sn8SoZeA9aGWaTHuVy\nmzjKMS2FWX7I3HOpG0t4UUoeRYxKI2yhhiN5GJmCpS6y2EzZ8W6gVktUU5Nqa4KfFawJj9Gf9hDE\nELv6YbrzQ5Y3NxHTiDQ8XmrzfvnAPIu9YNIs15GEGL0ic3DNI5zGaFiEkYMpi6i6RJOTvPbC79Dt\n7rBiXAIhIzncQ40ziiJgzVpntV2nXlrjcDQgUd9g7u3TXAq4Oxoyf9w9rldbZblZRhTq/M3/+H/h\nFI+wfeeIq9f2uHZrj1tbDzzLZOohk5NLAoJ0n7fRcxL8MOVMq44/T+nNXGoljeX6EpKogGZh6DZe\n7OOl3n1OSKlAkWUSx8OPQvI4hTxFTgSW9DZOs8VRJtIdBQyShHmU8XbX4eDOawxGD7YF7ryHLCpQ\nJMRJxHx2ROpOSZI5i8vrbJ55DEHWSIIIQYRCKAickJtXv0WlXkdTDZI8BlFAFIT7lAsUkGTEgUPZ\nriCSI0kyceAhSip2c5H+tSts741Y3FhivVPDiSvoFRtdNxGkCE2UcCZjgvEEQUhI04jB4C5n1ioM\ndo+oVHSm+99h371O5uaYCw1sy0bTqyRCgaCoqKIKooJs1Ij1ybE5S6KEg/0u866H4+locxU9LZDi\nEooscbbzKcQ4ZjCa4Hv7TMMBB8OruN6AhWKBmqgTIODNSmhhDWd6l1rJQ1ZVTGFGlExoim3Kuc2C\ndo4fJh+YZ3nj2ndYbK4TMsNLepx4Ygn3ts3o9oDGioauREiGyu3wKn44w5AVRkEXZ1IQVMus25DN\n4NbgNpIiolgKhlzl7VdVNOGQem0Tq9Zguf0IHD6Ab9VShSDzMSsuv/OF/4bOQGU8i1B1jWrZ5D7N\n6X3ABEuXSLLsPn2DCEQCCTmBL/HjT67ypfk9pllKWSvRWXmY+a13SByHg8glUySikY/cbCAWCkIW\nEuYZRppT0k32Dnus1GMsVWOl2uYAGE2nqG5IGgU88fRFDo+2ScMH/TXbN1+i3FinVi8jSwo5Gvfu\nXeHCQ5/gaO86gTdAN+qIdhsyB000kAyNwJvy1su/T7u5jr6whm3WyESRgpwsChFFGSGLsKs10jAG\nSQJksjzmu//i89RWLrK1/xbbd/rotsaSNWfu+Dz/6mucXzYxSwvU22VSb8545mLJLovrj2LXBOr2\nDeKkwD75LNnWLlonQY4DSvYqummTZhOG3S306gLefMbXul8gzY/XZzXMU1SWmwyHPe50r6DWy0SC\njVZ0mXkOw2KHyNcoGQZpYZEXE5w0x0xF2o0OkpoxDXfIRIEgDEhT8H2ZVkMhElLy6hGmt4Eituh5\nxys+3i8fmGcJu20cxwBpAVmOyHIRR4rRSip+MUdsOkReQqHKGEUbRTQJ4gFilLG6ZiDKOpJo0Fpa\nIZSmqIFCmhRUrWXWao+jhQaz8ZBXXj2eYRKVCgUZgrvCktQgCHxUUjQdKpaGLD8ALEhyUCUFQ1JI\n0pQgz0mSgqzwKdk2z17oIIQZFT1m5/B1mo06K2sn6KydwdRsGq0OVmWBLEop6watxQXMkoGIyrwQ\nuLV1h1RN+ImHPoStKNRME9E2GM4iVk930MSA7tGDyEwS++ze/TrDw1vMBgf0d99E1zIGw7u0V85S\nrq8iIhD5hxBnpHnE4uIZ4ixjNhkwHB5xePc1urtX8b0eCBlFlpEVKXlWYCgGiq6jSCAICUbJJIlc\nTCkkFWUkMSQJpryz5TN1Uxbbq2wd+hzu7zKaDojyjNlwjKLX6Q1vMx+PeerDz/LURz5Lx6iyudyg\nXa+y2G5iKCXEwsSorFOtt+kd9Jj0t/FEBU84vhXyAo/J7Ag37bOhnyMSPBQEyCxSNyWTPfIiQJQD\nzq48jqmWkFSVvJKhaia7o+vkooCXZ6h6TFxIeE7CvcGYoDdHHq5iGRp6GHAye+iHrtkPzLOc+fBZ\nCu0G1eAUE2mZqdvjmYc/xeWjt7DSdYYHt1GEEKYJeWCycmaF7N4Y37jFfJYiiQYnL53hYHeATRN0\nkyzJMeSMWB+ixSaJVkCc328WeFdUeYpmVEmtglZcoi86KDUBTRYwTZn3pVlQ392qBFlCLimkUYCf\nZkiZxmR4l43maf72L6wwC6ekRUKvv4UVqeSyhmTKxKnLyPdo2cuIRYDvR+imTdm2EeYSb+4E1G9c\nR5RVTm9s8OKbN5HzjE6nzGy4S9lcwMsfDN73h4ReSP/eFXxRZf3kk3QPDrn1zps89pGfZnnlBIPp\nASW1jjs/RFFr2LU2rXCN7tE9+qNtLLNEHse4cwPTrlOrbZBGIQUxOSaaaiHLBWmU8C9+/VeZ7l5j\n7gnYZoNqrY5SqvDxT51lGgZ0mhWC7CPsX/0G5Clh1MN3C/qHAoViMNXG1K1FyCJOnz1DFq3QvfcO\nZx7/NFtvfAdvdhm13SYRZa5vv8xjj3+EVmAT5AYE3ff+771792gtL1KqZYySGXEu0DGaXCrVeSUx\ncUOXU50PcTTdY/vadylKUFIrxEnAgfAmiiJyEB1i5ivMsjllvYxVsnHlPuW2TffObWbeIlnukvWP\nV3y8Xz4wYyniEaXSJgNvj6jboN5s8b0bl3nodJP+EZBoNNMmvhxSbvgMt7eo2W3U+Azbw11UMSFr\nCHRv7lI/tUxUDEndBLMhs393ilp20Q5qFHUJ3teiMOunSJbPQnPKfLhIyVCoVmsIooQXupii9mCM\nUgqqgSRAMI0Is4wkEshJePH6kI9eqpKnOnGW0O130R0BsaXfnwxBYfH0Wa5ff51aXUdIYcWuMwvm\nDIddhnGC2unw9eEM9413OPWZnySMX8IPcmIn5VsvbSH498jEB3mimzffwdbLjIsM3VJ58YXf48Tq\nBep1g4O7z7N76xtY5TZ2fRHdtIkCH9kSWFw6S5FDb3iT/4e5N422LD3rODIOBQAAIABJREFU+37v\nnvc+83znuvfWXF1dXV09S2qppdYswBYQjMGATTCBJGAS7ATI4A9OVsCLEGdlx
c6y5YWBGAiDBBKD\n0Cy16G71XN01V915OOfcc8+4z56nfChB1wXahuCs9vPtnnPWfd693/3s9xn//4kdEvkBpqmTRA6G\nXkAVMn7ioWpFNE0HKSFOAn7zd36JvNoAXB57cJb+eMKZY4vEQUBeNcktzlKq1Mkpgq0rn6feNFB0\nwcH+NopRQc0Z9LY3aS4tsXXnNWQ5Ra3PsrVzm8LyaaKewZVrLxImYz76wb/PcxufxIkTVPVoNqyx\n0OJgZ8jImzJfvI96mGfYddhYSdC9KSeXH+XVnc9zauHd7HObsXeHXFpEkkyykYJWr5LfPqCmCgZy\nCTkxGEchnpdQMi0sBkyTNmdX3820FnG3b/jPy9tmLKZhIccB2ThGKRnkNZUHH1lkuDtALcVUJ6eQ\ny3D27CMsqnNEgxGymmc63WW4N0SvCNZH11g8uUQ72ORYfgVpUcFQDUq1GGUas5/bA/+opznO2jyq\n3M+2C2U/ZjwOGHnbZKkCEoTjN6vHbpCRV1L82CUJFaIwI8tS/AiGk5ATrbssVa4XgpYndm26OxPS\nrMtCscqkd8BQhKR3rnOidRJJztjtdym4gqmikFoS9cIyu6nJe1aXyZcWyLQD4iBFEQa+4hE45p+u\n541XBzjuPorQibIp73jyPnx7SJjG1GcWCMKAbNAj8EYgaxQLNQq1JnJhnlprGcMwaLc3iGKXZBrg\neS6u51OrtlB0AzuFev0U3nSfoN9hoXAMP/VJpTKqKqOZPofb11h4/AO099aoz5zEMFXOPPggWxtb\n9MZXyZVsjLJBFk/xfA8hhextXEeWdSQJ4ngDTYuYuAGjJCZnqZyrP8pB+zLbzia26FOXVo/s2czs\nIjPVJWQ/5XLvZTZViVp9lsiLEQ2DW9efZW5uhdSJcMZdHEVjXpTpeBsIoZIcxsSpiq85SEHKQRQx\nW9IJAsFkaqOoMoqhcH39s6Rvjd769hnLxrWbnD15P1ahxHg05M60TbpbxJQsDKVPmk/Imavc/MaX\nuW6a6HqEODTwI53CfImZ2v1caqzw0uZLPP3Ix3n56nV+5Klv5f/4vV9kJ3mZueJFrFrMZCLBPQmx\nan2O33v1q9x/4kFGw0OQ5buV8CRF1WSc6E1jyZkmWZzie4IgDvHDFAmFTCSkQuWrb2zx4KkWwouo\nVTSs5hyeHzC1I7qOza4XUbYM+k4Hz/PpeB4aEeuLq8giI5eAYmikCvzyr/0af+dvfQ+f+Df/F7Wy\ngucPUDXtCI6VIkuILEFWYqJU4+tfuUbCVSpFjdXlCpceepBytYrre9hTm/Ggj3awTrlYo1xZoFCZ\nZWH5LIPeJtPxIVHsEY9DJr1NdLNCdXaRYnGOYnmW//2f/H1+9D/7m/zh558lcxyQYkSiMM4Ez335\nd7j0yLsY9ddw+jqrZy/w1IfeyZc/Z7N1u02zYZAKnSQLSXyZVItQhYafhBhGkSyVsQo6OBrf9q7v\nxc0cejsvsx5tI6EwCm4fiaadSUjgD9FbeR6efS+drSuoVkgQWyieilKdgmKw57xBYCnMaUUOD7o0\nKwtseK+waH2E2myf/a0+1WMN6q6JbOSw0SkKjd1gjWyikUoy8tFOmyPythnLfGMORTLwbIlUsshr\nZabDCmPVJcs1GXf6aBUFX8QYsyvIkcvG/hUWZpcI0n2uDUY40R770uv0XnEIwh1+9ndvslqv445X\nORhcxWzMkktMYP1P9RpCZvXUCWy9Qy7VkaIII2dSLhTwnAgaCnC3iS+NAyRVELgpSSQhREqYRNjT\nEFNVQNGJMsHqiXmSOMZxQqa2T+dwwlKrgS5H5BUNVzIZhRELtQJxkmOoW5BGyJICIiGVJGTd5Hf+\n4Pf44R/8IX7+n/0CqtAwlJg0f0+RLE7JmSoZKnIY40g+apKRRhJbaz693ouEQUQipjQbDU6dWqBS\nO0MU2vS7t9nfuUySKmRxQCpLiDSj3z3kk5/6IkvNOVLFZWbGZNAPKRSbbO7sY1kq+cYKzmiXSShR\nn19ESUx6gyG/8hu/y2MXjoGcsraxyXBk401j9oYuKnssLjawchYik9EMDUsr0W7voZSO89jF99Os\n1MlbRcr6DF9d+9fIUpnTKwqaVOXTd96EcFVJEcUcWpBjL32GXOsEXr+DqlrYbHCydpYgjoEGph1R\nmF1C1LfZ2LdJikX2995ALQYYVY00SJj4PbxwjXy2yA5rJLGJO83QFIGn/n+cwf//U7b2tlArdfac\nTZTYJZdrIkyBkvkkesby/BJBkBHYMs3Dm2y5u1TKLQ6DdfQwh2rKROiYUQ0j8TEaDZrWY3S9GwhZ\nIonnkPtFhkGHextJQ5FQnqliKXOIWx3MkoVhaiRJzLBvk8pvvtJ008KdTPH8mDC+231MMkWVBYqc\nkSSC124eUMgk7CAl8D1udjzeebZFq1FEEiqT/hhXyji/tIzDlL5aRs5CNDWHEALfD9DUu+Dcg9jj\nC198hv/qx3+cT/7mL9EZdnEP3szOaVJKjEwUxagylGXj7heKIF8wEXKKkiWochFZEVy/tc8fffEy\nUpIh6ybHFy2CVDAZB8RhhGwaTKdT5FyOC48eY29jk1vbDq2CRrc3QLr8Mk888Tg3rt6mUCiztd9h\nXtEJ0oiDwx6XHjwFZKzfWcNQc5w5dpJuf4coDjD0EuOxR7V5kkZ9BlVRufjI+7l95QpaTuWhh5/m\nzrU/pu3EjNbfYE0dUC2YZLbFzXH/yLMiiwydIgfuNqaskZg2crmMEe3hRgEjp02YxISRg9BVUKqo\nxNRbMVm6RJsdrCjPdOhQmikwERlpKChUqpTEPOvel4lI6O9KHD9u8lby9nFKWgnO9CZKJpA0mfFk\nhFGY8PTZv81sbpFSocHJlsGT7z2H3lQ5sbIACjTlVcxSnrxexh+3KRRXOAwixl2b7eEVwlhG6GWe\nuPA0d25fJVdtHtHbd18nHPext8YYZQMpp5AGPoO+TZpIqPckA6IkIcskBAo5WSKJp/hZjKYJFFng\npQmTVOYPX90hUTNqeZWnT7U4eXwe0zCZTGwiOWHZqqBoQKQySBKsQp0wcBCyjGYYRN4USchoSOwM\nerz44ks8cN9D6ImOpb+ZcEik5C4rsQSyKqNkAsvUMA0DS1XwQ5e+5+M6IcHQQ41Djs3VOXffDKdX\nmnhuTGu+TqViUp2ZIYpkZqotGvkK3nTKqbP34ToJL94cstN32Dnw2bizw+2NPXq9KUsLK1hFCV0T\nlEt1zqye5/jqeZqteUxLQ8nr1OozzM9VqOQsioUC3b11bMe+2/qvakjorPcS/rsf/wHK9RNkVsLC\nw0+hyzn83pRDN2CpeDRmceSESdDjWG2VWIE4zFAzk0HsY/s+A2+AmwrcLKGQ0wgTh2kwQbHKxJMO\ns+SolU+jZTr7nR30SMIJAupeQvvgRSylzInZJVoVg+7mW88xvm3GMkoStpwh1apKlPgYahERmPzB\n5X9B4A4JPIdbB4d0dnbobxbJojKK
nuBmU+zpiGm4TyzGZIFMTEYsFfG9LmmWUDZO8dxXfpX6isWx\n2tGK7LHmx7n5Qp84kyjkClRKVdI0YRol7B/0uRcJJ45jwjhBlTLSLIM0Q0klVCFIENwlncoIFIln\nrg1wpYSKZeLaEZPJXQauaBiSM/NkvuDVyZSDyRamnEMvloiiGJEmyIpKkqRIUkaSpNzY6hAh8T1/\n76eZbbx5LOpKhiUpqIqEJEI0U6Wgq+RNHU3TQMgsNerUGzl8UlIRk2UhhVyRfLnE7MIs3dv7FHJ5\nTp44hyoJtvf2OXGsQr55F1Dd1BWqBQvXmzJyXa5u7NCYnUEzYg46u7zy3PMcHHZJ05jZmQZzc7MU\nSg3mjy2z0GphWSppoKHoJoV6jfmlefKFKma1SEZIYTbPKzduUGyucPvW67iex0udZ9GVMkqhiJ4X\nXB8epWOXkAiJubrzMs5YEEQpWqiROh7FtIIXhBz2dsilQzwpYDBeJ8kSstBjfS3k1tYe08kuo8Cm\n34P9fpuKmOPZg2+wN/YJ45jJgYSqGLQW/yPEOs7RIOxKTPYcpFGe2uwIVxlwYfF96NkCmV8AzWbH\nv8a4cRVnkGe3M8SPHVw3YTKJiU2FtfbLFIWFoQ3wcfCcfezJAHXmGM1qiavPfeWI3tm8wXseeQ8l\nU5BELp3uHre2RghPUKoVkNI33Z4w8Ei8DM2QAZkkk+7OqglI0pg4zfCCkHyujFUwuLwFl/t9fHtC\n4Ic4zpRxFnC5vc3Xu+sMdEhFgZu7r5PTmoRywmTSQ8gGiqyQZRpCCJBVXr6xiz/Z5gf/83/8p+t5\n30e+i4c+8BGUggmKjikraIZO0cohqRCOEzRLot2bUimayJlCwRAcDIYUCzlkRWP14iNkWQFDU6gU\nLU4slrmy1sYb9UnTjDhKWGzoqLKBIhTiLGVnb49uxyOWFBAKOcViOvxmEdEdYeR0kiTCDWzmF06R\nr+TRrCoz1Rb5gkVOl5CjECnJSIKYM6UhW5tX6Hkxh77Ns+u/gWNHlAszTD2FvN06smdTzyEOBXEk\nMRhEBJ5MZ3TAwI2YTCNEqJEvyuwNbOxkSGzoBIZNJHnUFqqU6+cYxwoIlZykoOsme902/iDC9AVh\nKiPkiNhIKeb+IyQzGu3YOAcenS2J/NwMm9ddgq0SegaR0qHWkJFESCoVGdw0mY76nFg4z7Ha45Ry\nMgvFUyR2hplV6PTWGU1lJsMuTtckyNoE2ZS93gFzpxaP6H1p9zNYrTqplNE7GJNEGvlSjtJsidC1\nCe+5JXIkwEgJ/ZCEiEyAKmVkGYRRhp8G6JpCzoDZZpkD1+Ny2+UXv3GbQzPP6qMf5Ph7v5WD+QZJ\ndQEtjJG8mMiJ2e/dZqm2DJLG+itXCNxDJF0nl6ugaCqqKvGZr32V2H5zKvuhx57kwx95D9/13T9E\nNVWRFRPNstAU0LOIE8st7MOQmVqNubkVZpZnKZZrGHrEqLuDnMkcm50jXzJRdYtao4mmWeRUhZ31\nDptb+zQad3u+lueLjKYeS80atXKJki5R1HUsRUXPKbRadVQzR+SOyBGjJDH12hymoVHK16nXyhQq\nFnmrRq5aolquYE/bmHLE3PGHaVVP8OADj+DHQ+xwQBbF2Gmb0PZJSkd58mIxoWrWadRWOTF/P05/\nQLcz4MTsI9hOxmE3gLRKFsoE0wktq4Kmu3TtDjQUZkoq8dhBTUKyOM903UXzJCyrSJoX6HZCRa+S\nBIL9wVtzSr5tAf4H3vcevnH5efJWBQYRM+dVcvuLjDmgUjxNt38HO9EJ+wlFU8EOfaadkIExxbRa\n7I7WOdU6TTve4MnHv5evvP6v0YMSrj/AmXSYLx6nbadsT68d0fuJH/sadwbr/OpvPY+ZV4g9yJkK\nSRiiySr5wptBS0SKmmZIlk4yDMiSlDgTiAyEEDQqZSRVp9cbkoiYhZkqpVKe8WBEsWwwu7qIlGR8\n+3s/yB997etEvo5uCZIkYGwPCBNBqT5D5h0ijdqYaoJSXiSjTOj1sSyLX/v9r/OBb65nZrZFHAc8\n9f4znDlznm57F7Wg8blf/ASGlSMRCrf32hhykcPDDjPzM3S6PcqFCqms48U+e7tbaLqOZeaQ0gQv\nUXnnu97BK288jz1M0LQcc60aRveAcNFkPDnk2OIi05HPfrtNo1xHUUooQsOySpTNeYLEpdmaw48C\nJhMb3dLRdAVZgGxpmIZGlvm4k0PizKLWqPDAQw+Rt0yu9r9IEFaQKj08D8qV8t1p1XsIuCqljGGn\nzTSYIjcDtKKBfKjjeBFLxyOCgU7FmKUwJxG5MuEkwrKaCK1JGO0g63lEqYMcNJHrDrpew0l6mPN5\ngm7K1IGk2aVg5cn7JWDnL3xm374Av/IyrYVZonjAhIBwt8Ga/McM9sc4o1v4mY2eVtBUiVargGTZ\nNGYDDKOPlE040VhEnxGcX7rEdGeH0a6g2TK4tHicxco5DttrlIqzKH/mEj/+o+/nf/03P8d89Smy\nVCZJU1wnxo8dqq0q2T1ANWkiiFOJKIjws5Q4TUmTjCSLGTkRThAxnbgUczmqpQoFSyWvp5w8vYhV\nLKPqCbquoUs63/6Rp3jn42cIQxkkECHc3nqVcmEBtbKEkrcoWDV0d8hsMWSm0UI35pCtNwHnpCjF\nVPOkUczcQotz91/iwn2P8uEf+C8xGlXCNCIxijQrLVwv5eBgSKk2Q5il6JqBqRbotjvkcwWQ0ruJ\nAjmld7CLrjcIY5VqCXZ27rDeG1HTJfK5Cp6XYugatbJFuSQzOtwk8BMyoZJrNmg0FtAMg0qxhGXo\nzC2uUixWyeVrWIUq1VIdTShoQqe7t8trz3yNSrXC1trr9EKb5RWLsZuhZAlj5wAtPuqGXbm8Ty/s\n0ygtU81mcekwd3yeer2KpiwRN316o9tYhUXsLCEtHlJSzyAlIfXSKXKlKkvVJYQmaJXmUWsxVmkW\n4jlUtUr1tGBGOYccSeylbd5K3j6avOw4Vr6N8DSoegRBmZPa/cRouOGQJC0gazFq/jTDgwlecY+9\nXpdTi1UiV6NSWIADjaF6i62DCcfPVNi/0kN9YIedGyMS3WChWkaa1iF4M2c/u9zkTGmFvGHQTyFJ\nYvJ5hTTOoSgJ0T1U4KoiSIDAT0njlEjKSCMIfYXFuTKVgsHI8YhTiVK+iKFLOM6Qc2cfRZIF48M+\n1VqDOE7RMpMHzi7RbJX4wjNXCEOXOIvpbF/j+MnzbF7tsjLXQNcL6JUqge1jJetE+TeZyxRFRVIz\nNGymoztY+bME/pSzZ86wevyn+MVf+DnK2ohyVcEe5VDUHImk0ZxbIUqmjMZd5leWyal5fMel1ajR\nHzrEkkbg95mrNqnVFE6fOguqxMvPXqFgGkiKRBoFFEt3U8+ylMMJJxzL51H1ArVGkyCYQuwycn0k\nSUIzi6TElCvzpIqERIQxs0SyN+L9H/44rjLBiW1KosH+zjZeOMJJBZY0y8i5CvcMSx5fvB/f8
XH8\ngES4IId48TbHyk+QpQli2gPjLIejayzMnKDdvY288ocsr/4D3FGX3d3LfODB72Pi/t/sDzeoF07D\nZIjjH3LmYsqkJ1Gv1zFllzj9C8GI7j6z/yEN4K8ifuOLLC42MfOC1DlGdjhmsz3ACX0KhfNksgJZ\nHpGs4ycdDLdKkubY3gMn8RiuvcGBc5PedJ/5OYnIVUhLEbbdJ9/MU6nkkKUOLalxRG/oKPSDdW7u\nfIGJbePYU3q9CY4zIk4D4viebIiiEHopaXAXLCL0E8IoxiMkiiL2O0P6tk2zYHHyVI16OeHYsTOU\nFxaZmWuwsLCKqVhowmB4sMakPaCEy7e+8xSaaWLJOo4aEyQuVn0F2cohlw0UoVKdb3Hh/CMcq76J\nNhclE1I8gsTBsuZJkgGqLCNEiKZK/PB/89OsrC6jyypzS4tUanXyxSJxCpZVwXcSIn+KF/kcdNps\n7B6wdzggCzMefexdjIIBge9hD3ps3OpgGBnDUYDtBCAlVGvzyLKKrGsEfsDUnhD4HkLXKNRn0EyL\nLDgkS0N0zaRcbiKpEioZLhEyIfmcoL17E9VQOfC7LOYbSKlOs3qOGWuG2YbJjFg+smeBK2PkU/Rm\nF3wJa1JjnO7iTQ5R5ApxdoJHTn6Aj174KSR9wmrjAvakhqf+NrVyi4+89++gaQ0WazN86J0fQS13\nkWsBjdkqsnM/unaStfaL1Jcep24cLTXcK28fTd7WO9i5EZA72aNqyuSPCc42TpBlCmE0BNtntnic\n+Qs7nHnY5r/40E/wY9/xT6hZi+yNRlz299EzA12uMhjsYacOtdostUqB1ZkGQegyCt6gPT1a4FLr\nE272Nhm7HYQkkSQC3dBoNCtARnYPNHMmMqQsxU8j/CgjTDK8IKVYyDNbr1EsGSw2Zyk3Kzz/3Ovk\njTJnz5+llEJBy4MSo5oRat7DtCoEUUQYQL+9y8fuX0RVIHZ89rZu05w7zuWbW7iRjKxJ5HJVCtVZ\nFk++CT8bu3uk3h5yGkASoWsFFAWk1EPJXHRd8BM/9d/z5Ie+l0ZjlSDwcKcBURQQhinnzl2kUFhm\ne3+LfL7A2VP3USxWmbguSjRivtFACBUzbyG0BMMsYXtDpNil0jiOJJtoZgVETBoLRv0uSRLdpX5Q\nTcqtM1RmT5JJgjgJEQjIQCJAQSdOFBq1BmG6T6u8zGsbX+PFvaucW36E47MnuHT8aWSRY3a+dmTP\nnrz4YTStyvo1G83UCRODbi9k3NkhDDzCrs3a9g52MISkzGg4pO5dZPPOmOv7/4wvXP1lXrF/mTuT\nKd/Y+Ao5Q0IvZ1Qq8/TsCVP/AFWq8sruv6RZeuAtn9m3zVia5TM0LEG2f5wbazeR4hlsX2DmNTLp\nAGvJ4sH7VghufohweB//6ks/B6rPoXiNs40nacgab/TWONjJcN0CFgqZqhJFBju9GD0u4g9nGfWO\nzkbMNCREquFHIAkJWaToRZ1YJERhBNGbMYuUpQhMVEljGsREcUooBI1qhWI+REozNF3HnU5YXVhA\nNku0FpZQrRyIED2y6Xe3cfsOthPSs6fsdyZk2jIdR+W++dPMlmdx45j97g4iV+CNV75K92DA0B4y\n6m+TK76JfivrFonQMa0ailYlFRFZqiOrOSRJhyjEVCIee/RB/vb3/yAf+/jfZX+/y2B8iCQgETKS\nqdAszZKlMlkic+H++9BzMn/45ee5cXMTQwQ0SnXOrMyx1e2CmkfOF0nSAEU1qDVnKBUqqJaF7TlM\n+gOi0CNNM4LUY+XUEywsXWDhxAVqMytoRg61UEXoMgZTkH2On34Xum4Q+kMkzWG7t86d7vMcOmsU\nikUU/Wib/HC6y5Onv58nVr6DtbURzfoFKloNtVgmJzRKtXn2bj3L55/9dW5vvoZWVBk4DjmOU9S/\nG39k0V3vIGcDnJ7E5hWFLI3YWd8kp+kMRmMMpURJW2B7cBRZ5l5524xl96DDerCP4xV58JH3Uq7p\n1Fsy5YKPLreoVYpcG/8xc/kcv/B9nyIvn+JXPvUJgo5FvC8zHM1z7uwFGrUqrdoDCKmE5wRMHQ3f\ncbGTAcVkiaF7lNd8P96ncnoTz4U0zUhUFVVT2N/eJ4ljguBNN0xOIUwC3Dhg6kb4ccZMowZaTCmX\no1iqYU8mlKp1Es3k4jueQlVkUGU0q0ogF7DHY9Y21tjYbRPERXa6ATudHr3emFiWOb40y6OnzjHq\nd8mbJWRzjmvXv8TO2hX63RHuPWPFQWQgpXmmw9cJgl1EqqJrJrpVRcuV0cwyaaCRBSALOHf2DP/0\nF/4FgeOh6hKmamBZRRozM1QaDRRdJXDHXLjvIo89+igPP36WVK4y9lxuXLnFG7f22Gv3mW9WOOzt\nkQkZTU6wCjUKxTKKbOIHAe54Sug7EIOm6xjFMoZZIBMZQiQ404icopBEHnnDJFcoYLsHGFaJvFbG\nymWYWRlJrpJLFmm23nNkz8bhG2z1vsbqSotHTp9DFgMePPs+DHOInzjs9b+OV7/F+FabQlTEcGbw\nE49b1w/Y7FxFN9tEHmjOPFa6SGj5NKKzaJKP6+2gSXkOBlsEkylq/NYo+m9bgN8d7KIGNYQ+4voV\nm9OnBZX6HHsHV5Ezh0b6CAfjW9zZG/CF9U8yzF6joV+k2Mzx4JmLbH3+Gq//do/CCYO5hSrTSUCs\nCHxlSq6Swx54tJN9TlTO8Dqbf6o3GSsYJYXGaZN4NyTNBPXGPJPOhCyDe2GbP/HptT+/8P3Bn//s\nhW++jX7j83/Nu3LvVOfX/9y3QWAjyQm15hPEQRff9hkfbqObORRTQ9IMZDVPiozneOimjpkz+J9+\n9hN88rd+nXJNsLvZR7Nk2lu7JGnE7Ru3ab7rcRx3xPnT9/Hss7/PyvwlvnGrQ9HUqC/nubPTZji0\n0a0d5mbrFIsNxmMbw7QYTw4Z2QXkYZ5mcwHXdZEklYwYKcsoFWcwtCmhN0U2CuTNEopp4vguGztj\nzj90if0715HUgEF6g3xB43BPP3Ld/YOYrLbJaLjNwTTgwxe/lSvXtpByc8y0dMI05XC7zzs+JHhh\na4uH5ucIBjaqLDORbqFkOlIhz6C7hrXYZFaCqlFiZFQYJgNIEgrqMpFYx/Drf+6+/4m8bSfLIEzI\nFQSGWUMWgtvtNlfufI79kUS3M2DcdWlvucjmVX7t0/8nvqMyjbdIwhZfuPwpLp6/nwfffz+t2VmG\n4ykhDtUClPIa1WIdJInUifCCozFLWX6AUaeKNxKESJiygm6k6JlKkgkyVX6LFb/9ctDeRlJV3PE6\nkWRh1ZYwC0VMy8KbjAltm+mggzs9RIlTwtGUwPVQVcHf+I7vZHToksQu+5vb1Mp11jeu89R73ssz\nz32D5WPHuXPrNS49/A6GYZ+PfuBx7r/vNEtWEUUrMNfK8+rVGwzHY0gFxWIJQ8+hCoMwVonjlCAI\ncFwXw9DJUBkfdolJURUFq1ynNnsGLV9DVk0E8OBK
g9euf4Yzy8sYJQU39shSlaJ+tJmxoufob+4w\nGYxYWF3kzn6HxpxGx/lNnO4NPDaYXZ7HlWc4uXCKsXOAE3ZRQ5f67Ig410Evb1KUy4zbfUrh/Vzt\nbaNlJZzdKflmgaXFRcrKKuX6X0j6ALyNJ4tILdruITUzY8Immu3h2FXmarMkueu8MvoSTWuOad+i\n0iyRiwW2d8jy3AIdu09vGDPs7xBFLrpUIVdRGQwGtIpz7HWvUtRnOHlilSDJQ/fFNy8471IftkjT\nCjq7lCtFxocj/DRGUzISP+JHvvMsbhhhj2KCSOL1zSHHj8+Ruhn1uVnKJYgDn0a1hVXKoRsFzl68\nhKHrhKGDkHWyNMJ3R9y+fhXNnKfX75C6EaoUcDjwqJZyjH2fRnOGKI4JPZtmuUS+XEJWVVQEqmGB\nJmGPeuxde4OFl58lS12Wj61gUCWIXbIYhGWQKzaJ/RhJAUmSiDxjPB5JAAAgAElEQVQXSVMg9oky\nCUXReOdTH+STv/1v8dOIVEnQlByKFvHep9/Nxq3bPPbwJXbbO4x6MY2GxurSaW5tryMUn5JIWWgV\n+fwXv8F3f3uTcrl+t6PBsBj3Dzhx6n4cd0Qap6iKRBxBe2uHSmMeNJN8zsLzIlRZRlU1PN9lEPVp\nmAVu9F7Fnrgst04wDhIOvKMnuq1MWHiwybU7t6hPTyCVNznYn1LVLzBJRkiSjp4OcdMpZ5ae5lb7\nGbLEo7WSZ9+HQjEj9uvMHXuMctohVLvU4jxmaY6HS/OsuZ9lbTxhFB4yL/5COiHgL3myCCFkIcSr\nQojPfPPvqhDi80KIW0KIzwkhyvf89qeFELeFEDeEEB98y39qrnOucZHRZErYMyjlmjxUfxfhOEQW\nM+TFBRxHA6FgJxFGnKGHOhvrL9I9cNlr3yTLIrI4T5w4SGKEpcv0YgknVBnt2Ly88Qyj3lEC1mAU\nEVWm5MMUPWcgawqBF0AmkcUykqQiDIiijMlEMJpG+FFE0cxx7sIqx1smhiJYXDqNls+xfmed0xcu\nYhgqiLtEp5qq3IUJkhTK5TyBs01Bz6jUdPwYcgWdQJJRZZ3Qm5BENoVynmK9RqGcw9BVhCKBfLcI\nmjOrPPT0x+i6Cb2dIZLeRC42kCSBQBB4EzISNE1B1w1UWUNSZNLQJQ49kuCQJBzSaMzz3d/z4zz0\n0PuZjiYstgx29u5APGbqDnnx+hXGfZ/m6nkmfobtHnDpwuOcWFxGs6rsHjho1QqvXnmVw2Eby8xj\nGHl0XWN36yamkSOXLzO1D0mJqC8ts715m2KxiKoa6LpCHKdMxhOSNGbsg6oeRw8tFE1jGB7gDXpE\n6dGxYuISowODZvkEA/UyyVBGlCu40QE9f5f7C6cICx6arrM1/Tx6Bkqg4CcRgadx0J+hH6yRytt0\nRq8zjvex8hqp0uF25zZeUCTrRRhpC2f6Z3T/VY0F+AfcBfv+E4f+r02TF+1U2Ixvkq9GLFyyMcIG\nS6USj156nGK+iZJOKEcZUqqxN9xEmIKCdZJyfYm0NyFRfISc4Rl72KGHKRaJwzyxewfF8Lj42Dtp\nqE1GfyZ1vLnfJetX2WIPPwzZ2+8S+C6aKSFnMqYmoco6kiRRsMAjZXlxgXMPPMDcwiyuSGnNniRf\nyqMIiXe///0YlowiII5DRBYwneziuUPi2KdcMFleXqRWUQj8A4p5iXK+yWhkk0Q+kTNAxaNRraCo\ngixN7k4lSgpEGVkUE6UK+VyZ+y69gyvX3+D111/AH/Zwh13ccYd4aoMX3MXlkO8yD4NAZDq6USOZ\nuCiBQNVk6i2TD330o/zIT/yPpHqZF166yWsvvEy5tsoHP/aD/MP/5V/yQz/8k/zgj/40enmGJ979\nBKZikGUpy6tnef/jD9OoNrh96zUGg12yMLgLXZTEbN54DUnKCKYhILByeZ7/6pe4c+My9nSC4G46\n3pmO8V0XrC4LK1VGkktB1HF6EZKisNI6mr619F0maY/haMjE9gnUMaprkBM5Us1i3dsnHtoMxxvY\n9h2m7hrN1iJq0CJLFApyiDap0xm2UYoe7TseE/UqsZ/wsfsfYqG4TKl5kla5iO3u81byl2H+WgA+\nCvzPwH/9zY+/DfiTlMUvAV/5psH8KU0esCmE+BOavHvZwACoFg06m12adYmCVAc9Y0/uMdpsM5xO\n0BSZ40aJLWmAcBc4f/JJXn/9q0TJLOcuPsJG/HmMSYM48LEWyjjTMXvTHnlMCoUmr73yWRqzC8RG\n54jeudlVep0bXFz4Dg7jzyEpEbo6SywchBwgK5AlGYQJTpzxyOPv5MIDp9nd2sOZhCytXqDayBO4\nAY16jULNRFEkZMVCkySiKEVNIpLIQU4jVMvCcaeILKVYKNAf2Hi2D+4IqzVPsVwiDG3c6QA5KxPE\nAlU1kaWEOEuJ4xQUwfb2DsVKlZ7TJ0xDNto3qIcWRqUEQsJ3XUwhEQYRSDJWsUyIh2oYyLVZ0sAl\nCyYYpSpBHCFrBj/xj36eietyevUYjmvjukPSNML3x6g6/Lc/9T9QyhV57B1PQSrwgiGvvvAStnuI\ne7DHSy89g35eJ1eoI1QJESjY/R5avohp5Nl+4wqqatHvj8kXDrm1fZN6a5nRpAciwukWGEhd/tMP\n/Dy/+OmfIUpkJHNEZ3cd7klK2SIgyd2maTYwpG/hua//Hh99d4n2rS1WTlxi++A2ktpDuaVhnp/B\nVU1UcqSuzcjz0LKzbI2usDijoyWLrJ7x0bxHuDP5DAtzjxGGDkbRJBipzOQLvJX8ZU6W/w34RxyZ\n9ODfRZO3e8/v3pImz6rmmZ85xeFmQpYVuLBwiVzZQy5JiMRCkYrIKliFCkZS4PqdLS4tPoCmhVy7\ndhtnv0moeZRrVXJSk8F4ypnKHJ43pjeaMHf8EQ73h7Bz9OKH+zfJ5XSG/TWUYp5yocH4sIOqyiBF\nREiEUYQbyfwnP/FjnDt/joP2mCwOmV9eZn6piTsZocgSRtGgVj+GQEJIICRBmkCYeMSJRxb1iUKP\nzInQrAK337hJd+2QN67fYOwFtDtdbq3dpL23Q+i7pIogywRh5BGmUzy3j+f2mI63uLm2iaEZPHjp\nKax8DsULKR6/n3TgkmUhpqkSRzFCVtA0jSzxMfI6WRqjqDqKpkMmEUw90uwuTvLMbINTy4ukSYqi\n6ghZRkVGklNKxRolqwqZgioyPG+APRlz/oGHkEn49r/7k9Sbdb70pU+R+kPiRJDg4UYu7e4BGxt3\ncN0BnYMOm3fWaHd7TANYv32F8djjWu8bVFsVuv4Gl299kXI5x3ve+3EWaxcxj02O7Fm4W8J0HmSU\nGvi5P+D0aZOb+wfEyxOcaR+7O8INS7ymeHimxcGwy9X2C1x1bpPnDKQ+y9o5/EmMm27hTlMKlSkl\nrczu1k2KjQJCilmYX2Zz5+jL9V75dxqLEOJbgIMsy17liK2/Kd/kX/kr0+Q9/5kbbH6xzfCGiuhU\nOWgP8OI
Eu7NGpbRC6AX0xwd0+zHHz51huLVGIns4/pDV+xoszRn0hgO8fszhYERRaLTHY/SgyLQb\nsfbyC0QouP7ROku9WKBVvkBP+jpqSSApMnKlcJfmOhZYeRPfC6k1FkjcPt3dq/hBj3K1iheF7O1s\n05ipsbA8h2WaSEqGkGSiOEDKQFYUclYDVTbIUhlnZAMSgediWhapLGNkgslgxHDQIUplRhOXSb/N\nuLeN7ztEvofvpkipRJLC669u0jvwkTJ44okPsbvR5/KNa3R2r6IuLyArBu60h0SErAhSSSbNAty9\nyzjddVIpIzezQGVmEbOYRzc0DMMiiROSLCMTIIiIYx8hCYr5WfK5IkkSEUYOQeChaQY5M0eSurz7\nqe9kZ2+Xj/3NH+Hc6bN89fkvE9pjQBCnGrpQccZj+oMhqqayvbVFFPmUqy300hKWLPjs5f+HYAi+\nJxPrFmkCE38HVZ1Fjd91ZM+KZgkCA5053l38eyTKHBWjhhzfj5ArKPkim9d3eeh0gbVrt1jNneFM\n4378ICOYgtd2mAYRihLhdOZpFU6TaB717Aw3Br/PK994jquf3uCzv/NlelfeGrHi3+eGvQP4NiHE\nRwEDKAohfoX/ADR5F74rT1FKKOfOsMIym16fnNIkN6+QG2UkhYC1A4cV7UlqtVM8/HgVVTtkzihi\nFle53u5ghwnpeMjM/AQnyyFtSyw/cJrtOzuU5kN6Oz4PP/YUz4/eRHXsdDP8Wy9zbPU0035Mo5wn\n9CK0vIEsRai6RnnmHO/68PvY39zg0rvfjdd3yFfyiCjACyLKpRaes00augRe7i78qZSRRh6SZOAF\nARIJslEhFxsEWYiIMorlMnu7G5D6aJnKzNwMMglyo4KQVSRFJ8tCklgAMWkiISsRp07UmHo2zz37\naaTY4Pt/9Cf59X/7z7EdG6m7y9zscdAKxNGUxEuRlDH63BKi/DCGkqe9e5tCpYHnjUkzwd0wMiNJ\nItI0RZZlfN9FUzWiKEBIgsGwSxj5FPIlZEkjJUOSdaQ4wnf2qBdraK0ajzz9tyhc/iMkPWQaW5ix\nj217xI5H5AeUCmUG3X3at2/QWl7hD/7gN3nfR78Ls1ZDpBMahTl6m68gxTLb7RcYbUvIzaMVfK02\nopCkIDX53evPUc+foGxB3TqB73TJYo+H36eTR7ByIuKNl6/Rui+HmgpCWyDrBnVDZjJZwJD3GdtF\nzNQhtCZUpsfQVly0hRpPPfYDPLv2z7n2pb/4dPn3kRn9DPAzAEKI9wD/MMuy7xNC/FP+mjR5zeQY\nqiIjAnhxeh2tCqP2dSYTHd+4ieTrjIVPsXzA2st9DEvh+f1bpIlNI/YYjrdZKSyRiJCDdY/6ckD9\nTIO1N65yamWOUVRlZdZkd/fyEebbB84/yfPPfB3XzlEKI8aOR2PhOL7vYKg55o+dptqokSUR9507\njyioRHIOTQ2RlBxGoHBwcIP++g3mFu5DSBK+N0IAaaSQyWPkJCMIU2QkhGaShh6RKFBppZzTJYb2\nFEVSkaOMLMkI0yH1+gKGWUIkCUmSEoQ+Ig1JFYModVHTBM8PcbwBv/pL/4r5mRLba3ssLYXoeZNq\nsUGGTs6okMgJiaySkdHt32L9zhsMx4dU6lWqtVV0HfzAISNBFjJBkOL7Q+IoI18wMdQi1UqL0biL\nLKnIioZERhI75HIFwMBzfeIk4h3v/CBPvOsDfPULn4SpQ2/PZzruEgQJQeBhOy4TP2Y06BPJEXbH\nIU5SJCHIJBslk9hxdlhaqhCOWsws6UyCo23ygReRlMtIvoHrXaXv7aHMPcaZ+iwXL/wNfuuFV7m+\nnpGaGxybOUt9QWft5jqBo3PmvkVmjWN0p3eYHExQmlA0W5xZucBnnvktPvDwR/nalU+hqAqfufxT\nEJZ4K/mr1ln+xKX6Wf6aNHk5b4ZxusHxxRWG3haHhyPsdhezeJJ0NEbTK1SqLbZGHS7f2qFVSgmk\nPHOLTRw/4sLie9gYXsYN4eLjTdZ3D0nxmDmm4mYjpDTBsYvY1voRvVcPv0RtXkNSDjk1c4YLl57m\n6msv3W1ZXznN2JvQyPL4oy43d7ZYPH6KnGkRBQ5FXQVtSqlY4uWtPWZPPoGu6WRZDuIIRdMIgxhf\nmHfb+0MfKQ3JQoGRl0m9EqVSGaO7z3Q0oVCsUchbDEYmsfBJvDGpLCFLoKJwF84sQaSCKEoJfZep\nr2HZ+1ztJyyfKmD7Dnr/AEVREFlMLCtoko4W33XJZCRax5apVGrEkc9h7zblUgNd0/GTGD1LGNsD\nFElGtYooioIkZdjTMYah4nk29nQPTTMw1LuuWRC4FEp5Ij9mOh1QrTb5lm/7fl569VUUSearv/vb\nuN4Ee9TGc3x6fsLs1GDg9Fnba/Pa9T/Gt7YpJ2eZhGNmaicQXgUn7iAJG88/6vF3gwnp/pi58v/L\n3nvG2paf532/1evu7fR6e5l6p7EMySGHRZRF01ISFUgyJMUOjMhGAhg2giCglAApyIdYsQM4sQ0j\nsgIXmqqmRFHsnMKZuXfu3H7POfeUfco+u7e1Vy/5cAUodDgwkCgYBfDzaX37/xfW+2C9/Ulx3BBP\nDdkQQt5970948+Zv0o8OqRRXsbJFuqczNCshl9OYzy8wPR1xlLWYKy+zdqlCxVjisH2dw61HrK7V\n8VoielAijI4ZuHlE6Yfd9v9HZMmy7DvAd/70+f+1TN5e5/uIchG3mLGU32S8u83Khs3+1ox8XYRw\nTKN2jc3KIoPZHzDtjdALIkeHfYq2RFKIyEtFatUyirnK8sotZj2VSRrQ3k0QMpDzHfxxGYp/tgI1\ndmRqBqhmif7eEdNLPs405uqHPkEWydSNEjPPxTDnWF80kAWR5u4tFpfWHy+wCHzc0ZCNS9eo2DXE\nLEMUUmI0gmiGYRVRwhRPECB93A2gRwZSLg/SmDgKqZZr1MpF3MkUWdCoFqoE0TFh2EdERlUtkixA\nknSkNCDOZCQ1xJRsZDNhFkaEgUmUePTahzQf3eP5lz+LogQkqFh5G2faxtBsnFmfwWkLUfBQJI0s\nnTGbRgSKTRJEPDrZZfP8E8ymE2TTJs0yBFFEkVWOT7YoFVewzDxZJuAGQ9Io42DnTQ6bh3zisz9P\noZgnTRIiIp6+egUEiSQVcCeHNPd3+MZXv4Y7GSAkNXS1wqXL1xhqbUryZYaDPp4tkbo9ioUExASE\nAEH8YZmQJEjRdQPVyKHrNiUjx1SaYRVz5NIinZ2MdjRlfdXg8M2M8uKEmeexsbZMZ3IfbSIh6zFO\nlNEbfpsziy9TWKmAP+I0vs7Y3UaXTeYwGSTvH7NIX/rSl/5ddv3njl/7tV/70me/+GFSBDJRIhRO\nsHMlxt0I9/QUR+pi5uoMO9tEQpft/QPUuRTNytiYW+TS2ctsHdzhidWPMRIcMv8m6tTmaDpFiWPc\nnkthTUQVK6jAnvVnmiur7TJlWaOemCShyOaZKyytnyeniRA7lAoG
gqoQ+2NyRo5YrUDgoWs6uw9v\nkKYZdrlBo75AnKRoBiRxhibJIIlkiUiKhCbLkIEkiSi5CkmUgeAhiSK6ZUAKciZjFQqoskmKRxgK\niMLjyUzVyJHEY4JMJvA9skTBdydEfkqvPwExZW//kCAUCGMRQw0xdIswcGjt3sEbndBuNxl2D/Am\nR0iyyIN7d1HiAHfiI9oKWZYiyilhHOHP+iCkDIfHpImGoolYlk0YRyiKTgpEQUi1MU9tfoNvf+33\nmfk9FCkjTmKKxRoZCY4zRohDFFNm++FD0GxmgxFDz0Evn2Xh2kWMSoY0abO4+WGS8SFFs0EmJPiJ\nz1rtSXb7N9j/v6jmnhVr5FWNMX3OlM4gCzO8KCTqTdFFi5JR4LlLn6Q/zAiMCZHuUdDzGPrjkQPC\nEWP5hIq2ytj1GLQOkeMp9x7+gEweohgZLjItZ4zrOZy8E/ClL33p1/5tu/3A2l2Wa8vcu/M2mxtX\nmXgm90dv0n00JrXAjIocPopYXzzH/sM7rKyu4py6SCWf9vSInJdn6dwyB5NTypbJs9VfZWVpm1//\nn/85EyVk/uwiLeeEslpnlvywRmAQ+Twcu9Rsg6JdxbAFVDHj5HCLLDEYD/fxZxK33rvBpQsXWF5a\nZDx1mI67rF58EcNeoNPcQs6JOKNDjMI5Qq+DWlhDzTL8aYik6AjCYzlI1wuR5Mfn6qpApqikXoRs\nW7iphJhGqFoO3zUxVIMk8pEVEyFViDMbOXWQYokw8pATBbPeYDzcZzgYU6nVmA1OiKwyjw56JCnM\nz5c52B+hKiNCb4ozGxHFCotTFzkz+N5r7/Hxz7xK4maIakwWZjSPdrl/85sYeoFnXvgMqXQIwgIZ\nIaQSg8kUSZQxLIPI98kkmWdefJXvffuPefvN6/z8L/wS5dIyCDHDzjbjcZ/Vs9f4zOc+z36zySxy\nsIwCSxcu8tM//knCTOFv/nefZ97u0UpiqtFtTP8ccrGDU/gtnrmq8K1bf/bNTFNiOhkiJSbklkGQ\nsQWdR9ObDDii3wzpRgcMvC0IReaM84S6Ts0qIdef4Puj7yLvV+hX7pMrxqxcXGW+/GFOvSN8aUTV\nvsJm9SqDwV1k4Zh3mPCj8IGRxZ0NQJxwd+8mUXxICCyuXmLqnuL6UDDG7O+00VcF7JlHrMNotsez\nKx8lHTTZOR1Srb3Ce3f+JU99epFb7x5wbfMFDts77HgjCsMKx/4+hq7/0LlJqPPq8x8nuPuAwWjM\nyaMDHHeCUTDY39nBGw4oFAw+9ernmV9Z4pt/8GWefPY5TLNIEMfE7T0EKWDr5n1qS8vsvned9fMX\nyGKPNAnJsghNtYjCABDQbBMZAUXL4QcuSpLi+jGC5yGikwkiomJSMJfxJ4/IF4sEfgxZgoiMgoGb\nPh5BTrOEwcEWEylm48I5To57PHP1Cve3t8iXqtx/7w6i+CLdaRcpVYm8ISQhBx2Fk+MbrF26ghQM\nuH39DpqeoWgRxco6spxhVc9x6eolcvU54tBDFmL6/QG54gq5Qo77d96mYClcv/4ml86d592bb+KG\nKc889SyWVSJIXG6+/q/wI4PK3AL93jFZCifNPf6TX/5buP6Q3//q18iST2Co8A/+zu/xy7/+PNVG\nnrFTJLCOEZgxmmbM5X/YLE9bI4p6jefPfoG7O/+UVjChZm8SJD7eYIQ4q6LoMnPRBurcBudXXuTm\nw3/Nfv8G9/7wX/DJn/pJFuRF7jV/kw8t/ww3Tm5jl1KefHaD02ZM0HIIjD6B7/Ho+P23u3xgbtjC\nesrwNGCWesySNsppkdjOSNQAU6lSzJvEZorkGriRR01bfDyS2u0zbk0JPJ3N2jJbrfe4c3gT3XyG\ntVqJb7x7m9mhg+OGiNUqhqTSLPyZG1bY11BUj2ef+CSXnr5GEvrEgkKSiFi6zdWnr3LlyRcwDIXm\nQZtet4UsWkSJRBKOUGQbo1SikC8yGrWxyzXC4R7jQZfBdIquysiyhCDIZFJGHDgkUgJJhCrKJFmC\npmsomk5KBoKJEEQIkoKWL0IoopEgSBqKauJ6HpokEyY+k/GI/ZZHpXaGi5vrlBsLTAcdetMAS9c5\nPWzSHQ4YTiZUqnNIssF4MKO2tEipWsYZpMSJx72tRzT3DxG1ec5eXkMQy6ycOUe9XOe4+ZC9nbvc\nv3eLe1uHhKHPyso6D26+TrFUJYodOkdHKKrIZDggCoa0mluEsUe3PaZgqVi2QJaCqOTp9dtYmoUo\njjFTh0tXPoYoCKRxwsQNeDT4OtNRynjiU5KWkL0Ce8cyj7I/k8ornujoOZ/JqM8oHpPTUgR5AVGd\noKoVXn7pr3ByepMnll9CUnJ4fofxoEexfp5yucCwPWQsx9x/q01uTmQYHnN6fA/fneJEexSyEoqq\n0Ekc8mWdu187+Yvlhp0EU2jkCEKHaUdDnU6IcqcoqU6xaDL0TlFmKpZRxx+79KL75OICoqRwJJyw\nlDPZOnkNNRVxHZM48flXf3IL8jI5ZZFGpcHuzi6zOfWHzs2bJk+sfYJizqBUrtDcu0cYZxDrOKM2\nWTAgDV3qlTLFIjTq67TGLubMp1jUKZQaZEGCma9TSgIU3SaIXQo5mxiV6bQHkogoWGhyhixBFAVk\nYUSkiMipSioLJEmGJgjESCRBEy+TkTKBNBRI0phMFUjCACUJ6Q67dDtDWm5KKio8e2WNo9YxQRBy\neNpjfe0sO7sPkaUQaTpicfNpgtmE+cU5LLtAEgvIUo7D5tvkLJFKo0rie9SrOjnV4Le/+lU+/PIr\niBk4bsTTz/44mm5x4/q3KeQ12q1jVs5dpVpd4OBol0iM0QIPVRfZ2R/TqCT4wgPCzKfTG3NZydNY\nLLK5eR5NsfjaN77MtYtn0JWEkAAxTbDtPJ98/qf4ozv/iIJqkcgalqTTHA3ImeYPCVCBROKr5Irz\nhHHIYNTlQ+sv0B9JjLwJvdN7nF39GP2kRKd9h9To0go7nJmUidji6JHK+XmX/ILGQXeKFyRE8Ywl\nu87pCCp5i8A4oaJI6Poi8DY/Ch+cG5aCMAwJTQ9RtcivlzAFG30hoT3aI5fVicwEfzJlfukM41GT\nJFPwU4HnF15hp3Mft68wnuZozG/w1r33iGcuyAJifsjOwZT5ywoj54d/q69e+wRXz11kd/sOgpSj\n3thgPB1DEqLrG7Q7J6wKGqpRwZYaLG6UuFpf4t/8i3+GIVcIEpUk8giCGG+W8ODW6zQaFTTZIMlC\nbLtBGqfoeZUsSQhDUFUT0oA0SVAMlak7JvViwjhCCgVIbUxRYOYMcT0fRdcRvRmdVhc/ybh3NGHm\nBwihRK48x9vX38bMa0zGKUvzNR5t32NlbZULG5tousvu3oi5+RqaoiIJOofN20iyzYc+8iG2HuyQ\nL1i0e6c0j9tcvibx8c98ntDXufPeDZ585lkKpQqKYVKrzVGqzDM3v4wzHWJYBdbPv8g//R/+U6qL\n8yyUqiDKmEqB2HcJvRD
JkAjTmMbcMr4XMjc/T+pPOeiMufzsT6AKKkgpYegxX5/nTHWDODyHG9/B\n8UIWG2dJp/+WmJFRpFJaYzBrEcoOf/XMF+jZ81w59yu0+w/5o1u/gz04QcnlWFFrGLVnOTcHllhk\n2xHZvLJE0PKYGLfJkFlcvEjFLvGDrW+g2LDPfaSJy86tmA+9UOL9ILxPGeT/UwiCkP2l/+oypycR\nhibTbw6QjJgsFSkv5SGFTErxfZe5yhz+MCYUwTbWONi7i1GZIs/qJOM+YskgFSRUS0WUMpy2g66V\nQesg2jLhVOIHSwf/7kv9e/yFxX9z9u+ydfgm077P+c01FgKTLTNGQ8Qd7iBIkKYVfDkgYoQgZtjW\nAMdT6Jw6XDjzceLIp3tyQCJkDJyAQhU6swmik/Dkxkd46PwRc/IV9GKBf/wrv0+WZf+39q4PbFKy\n1Zsy8ydM/S7VRo7Ujnnxo38JIdUZd8c4h6DnFcJZzGDkMx0OGDU72CWLqnWBLEmI6yJRT0W0E66c\nv0zqd5GrKZk5Y+RHWCyyMnfpg3rFf48/J9zcfYur5z5ErEr0evf4wfQWZuyw1XlIaiZ00wleInF8\nfB9FWWHmDQmiM9ilHHqq4s2ahM4DPvzSr/Dhpz/PmdoqWZSw0jjDy8//IvujDlX7KTqDU2r5pfe9\nxwfmhhFoxOMMChldcYYxKvHd1/85uXwDUgu9YOK0TkmwUOUcohJjJQ59I6JzdAiZhjHL45Y9pEjh\n7o33GMQpxWpE4kfUi5sks4DDzqPHTTf/Hv+/Rawc8P2bGetnVV7/9ilhvUff65KSks+/hCOMGST3\nOXvhIjllxslJDzWqkaQSU61LtjugVH6OB3e+RbGyxLnzLzFuRihoKMKMZ9afoDW9izn/FLfe+vb7\n3uMDI4s3bJPERaraPKkfEOVmaNkqoppi1yROjptc2rxAOxpiSFNmnQR5zaXsFOgzZeOJCzSH++TD\nAn4nJhQSyrkaBzdaLF80mE576HKeXHWOn01tBtMUsyTSuqP2fD8AACAASURBVD3lL3/8M1Qbc6Re\nSP90ih8M6DRPWFhZopQv8GBrn2dffJXVlRV0w0TWM5JIJI1CVNUkTQOCMCFnaARRTOe0DSLs3HyI\nbohIsg+JTM7QiVNI/CGqIpIkKVbexh2MGbTGRBkMxz2iNKXT67FQXcbO29iCgeMHSChIgsTJ8IRB\nf8T6U08wt2Bx68EWdi7PYk2HdMxvffkHXFgrctAZ0+17jCcxsqJyfsXkxY88z1uvf5dSocJTTz+F\nkbeZjPqUixXu3b7NYc/ll//63+T6zXcYNm+xv3NIY7FBJlp4ozb77QkvX3uefF5FU3Teunmdj73w\nJAE6f/TNb+J54AYqeUPg7MZFVtdtQj9hbvMiT12+iGrYqEaOydjFyudZXz6LJMnIskwQ+GRZxn//\nT36d5vgt+u4RU60Nbo16dZ2MHl7kMmcs0kpOEf01Lj2/RNJ9mpKRcRB3cNMQN2qRE3IkzjuMpHnG\nJxHVhQnDkymIJdbmX+Q0eETn9jb5y2W29iRWahvsen9Ac9ol6Gic+K9hGA2k0p9Du8ufN1YvbNDd\nmzIZ9/DbAfa8gqVLuMcDciWBsqVwfP+E6tkGVbvImG2iNGSloqPkcnS6x6iBhyIHpLUEzZQIBjHz\nZxZI5QDZU1FFl7IJEzfGk2co3jwLaxZJGOCN2jCJGI6mxFkIqkK31SSc5JAQ2L7/Hmsb61iGSb+7\nw2TqUl3YxA9dVEnHnw0xjDpRmHLwaAtdU5lfX2Tv3jZx6FObq9I5aaFLoOVVUlIkU8HzZoSCxMnw\nCE2yCOMUIRFZrM0hywK2YaPJeQqmRiTERGnEmbkSw84xkgT7x12STOHksM83vn6XjfUG1WqBxaU5\n7h5OESSDmd/GljJ0Pc/enTs4oUo48Gl1PPzdLb731i4LlSL5gsRp2+MP/uX/hm1aHDRPqTUW8MYR\nlu2xcf4pVHWbt+49pFpvsFBMsWWRxC7x2nfeIBUM8jmdQs1AlBRk00JIC+jWjJIu0T5uktMMxFyF\nXMFk0D9lffksURgjICLLMmEYUrV1moFKJkMpyjNNFGQnJtRN8plMkojIjkon7hDEMNEeIjll7MzE\nDzqIogxOgNT/KGyEvPhSgchrgNMlHXXol/dZNDbxrmhkhsckVpkEA+ToSWZ9lVhpM1es4p+YlOeu\nAF/+kTb7gZFl1nPQTAFbMxlUBGwxj+eOMSvzTIQBBArGWoiiynSCJlkMkqtx7Lp40oRCcYmpI9EZ\n9chX8hAFNJbO0D/dR9B9fA/GQcykO2aleo5k1iOVTNaWl4mQyWYxSRqhIhOmIWGc4IxcolCgP3VJ\njsc88UyLvGay9eAdVtY+RBD4TNpdps6AfL5Iv9PC9xK6vS7XnnuJ7e1dTvYfoWsyqT/BKOaRZJ1x\nc59UAllVKVYtnGmC44uISoKtiASpiKUXsfNl6uUGimAQJTGWJTGeDojEhMLCBu++9wOkfES7JdAo\nj6jVNCZTl0Je5bf/5Bbz1SquP2F+vk4pZzHLdMoFm9uv7WPmVQ6O3sDxp1zeXObaU0sgqEy8+xwc\n91AVhzsPh3zxU4v0gwg7Z5O3NK7fPSUVUpbrGnn7DDduNDFv3+OJS09zd+sW7faAwWmXF669zKPt\nbcqlCwQTEVU6QlNFvvJvfsB//Et/ldO9Luef+zSOM0ZRNbIwQtdNNE3jp3/iV/m9//ZfIysSc43L\nVHMZw/4OQlZHDaocz3bBTAgSgbyco0AJV5PIyWPCWQ450JgWuzx6912WrBql2oRZEKKVAhaLS0RZ\nSGf6gNTPkNIIPa1gqCW85B4l43NsN9uISh1N89hufuV9bfYDI0uloTIY2jTmRIzZjPHIRVV03KSL\nKhrUaznS/JD+6QCt4DNfNZh4PmI8IwlUBtNdpImCKRhkaUpJXOS4u4MoyFSiPPk5hYnzWCK7P76P\nktWJo9tMOypPPf8KmedxsLtHKkMWJMRhRn8aMvUSDo5HPHH1Gr1eF1X2mU4yvvet73L5mcs097bY\nu/0emlGitjRPY26ZfL7B0sZlYtnk9a9/DUFQUOWEQilPQoIYCRhKijcdEQug6TbNVpdavkZeNsnl\nZWzTomjl0WwTRbWQhYxMjDG0AulkyszpYhQLjAY7HLUc9lsCuqwjqy4PbrfZXF4nbymUChaqZeEM\nB8ychMW6xXy1TLUs89mPPc+d3X3yos/x4YCAED+Tqdkmo3FAtahz2HZJwoAFq8RJu838SpHdnRPS\nSEIVHdY2F5AKFbYPH+COPDZW5rhwfo4HW/dZXd9gOgUzZ7DdPMSdhvz0f/QJpm4LzS5ysn+TNJ5R\nLi8xc30W5hcxTZNSrkCOAhlDzGIDcTzlyFdxRqcUtQHlRp3JdEwiRAzSFFuKyOtn8KaHqGZCnMyY\nVxYxXxCIx3sE7iq2rpEOEk76W5w79yS9SKDvtCireYR6xKj5iHv7Pa49sc3a0nn2
27eIjQRjuAls\n/0ib/cDIYhrrXL54jvfu/D62olNbWSCYzvCFEmE8RhFC/EigtiCiij7jiUnZrnPiR3idNkk5Q9JU\nGqJOezxiz91FKmSoaPiRRThxMOMS05mLb8UkkzaZp7FiBuzvbZOGKVHg4LshshgTeROG4wGZoEMm\n0O30qNbK3Nu5TXPrgLMXnmHUG3HwYIdbBy0y95CnU5EkiHnlCz+D63TwxwN+7Bd+ia/843/EbJbg\nekPiFBQpxQ1iMiEj8COkxCFn63gzHy+esHblE6ydP088nYGYoOkyYTDm8PgY1xkjyApe6CGKGfXF\nZSSxQ2LN8eDebbypxLn1SwhKxEmrjW3LbJZUukHAZBZCpPMf/uUPs1hf5Ma738cWTIRU5N3mERcW\n5vjMRy/y2vcfcPlCnrdvztje2+PHX75MQU/xClXKRx2e+ezz+CJo9grnVx1mnss33n3I6mIVTYJp\n75jFRhVV0VE0BT8OmTohhbKF46QUCjqtZhexlGFYOQyzRD5fQBRFPM+DVOJ/+tv/Bz//pc/i3HwN\nRTOpL+aZtrsEXoWJs02Q2WSuiqIFFHLLNMwGB3GPmlbFJEBI+6RBh3rtKfrOPm53RrV6BSsrcjod\noNl56nED1VwgHvZQCx7PnnuZ7nQE/pTq3GX6g4ek5vuXUj4wssyRY3ByhBLlccZj8kvnmaW3wS+R\nJCqyrpJXZCpCgb2phKZCfxCwUMkztiKG/T6+5XFyPMXMS5h2kcT0yXyQPQ/BCKiai5hmyKnroeSq\nJGbI63duIxNzNGpTmNrEYUqURRwPIsxEJRJjMkNkf+8Wb799gaVGDlSF9975HnONOWTb4sziGpJp\nkK/mcNwep9sPKdSqzC2v0rx/l4995sd57et/jK6rpJJKFKeIeCSRT7lqE7ghvusyd3aFn/oPPkca\niPRbJ5webqHqNorapHl6iKHlSAUBS1TwgxhT04hkAbVYw5R9Lp87y85BEzOnMh65FKuLlCsxk0Bm\n89KTnFsuEM1ccmqKkVNpdUPSaIhmaKQTh0O1x/ANn9X1BW7f2CJXsOh1Z3zn+j4XL5vcfe89Xnjh\nEucvnmM4GFMs6DxyO2RZjKwYnJ5OGXYnCFaOl68tYpfnefMH30HWqxRzFvghvXGGT0jeLjLw2sxl\nOggCaRo9HgUQJSJBoJwW+Xt/47f4L//Zz2JnCtuPRtg5C3Gm0wxHLNcNcmWL6dBh73iHjuJytrqI\nr6vIqcrJ6Ig55Tz98RZPXv0cbpjRGd/mpfNf5GBwl5ws8HDWxI8dLHmenHSOWB6xGFh4UUDJTSmX\nznP0owd7gQ+wN2ztOQVDFtGkmI35C5xZucKj3SZ50WNpfYlZGJDP5ZBNG3cWs9JYZjaZUioUgBym\nJtEZjHEnAtWCgZ+ITPZdDFUnCVV838WLh+RKdY5be2RRRuyBjETghfiCS+fEA1GlNwnwvZiT0YRi\nRWfUT1C1Au2TFq3jR8SzCCeYIcoK+BpxHLC5UKFgCYio9Lpt0miGKSjUFlaRkhS9PE8kaui6yHDc\nZTqeMRh28EY+BS1l7cwFPvHRZ5j22iiZTH15mUK5gjNxmWQJbpQiqzbD0xGipiBKOqpVobaySprF\n5IpVVMPAmc0QBJPJdMbS4hxRJDLt9Ljz7tt0j7usb6zyjW//CZfOnuH7P7hNfW2ZUXuELwvUyzpC\nPMF1As5u1Nne6/KzP/dXUGUPt9ND0ATyYsTZjSVu3n+Aolu88eY7pKLBfrONbeoYhsZcY471jat8\n/Ru/S14vksQJURTieA77R0coks5c5XGgvnnhCURBxg+miJKOadqossTeyS6CInK3/xpSYNBqD6lr\nBllBpFGvk9MMJEtGza9x5ex5hEzGcRISyUFyMkLBx41l9GKeTqdJv3PIYHxMZ+CSy/LsON/DdwRW\nFs8hJinOVKTtH/PU8kX2wh3udvdRSyXak3s0v+39yN6wD6yC/xv/8G+QJDnicEokmeQMhTDsEerQ\n67s0T464cHmFo6M+jnNEZzRBVDRMUcWJXOTyGKU/R3t6gmUlCMwTRw6z2QxB0UmzkLyuM5hCTs3j\nOm1KtTXcYRtDLxCoHuI2CKJCpzMht2oyr5TxA4WZ61OpLtOoSVgyHLd6zNXyHHdDLp9fI/IdapUK\ntYVVzp69xPXXvkouX8JQylTnG7QHx8TTmE77FFHJM/PGOH5APO2xc/wIIbL43I+9jBgI5Ap5LE0j\nZ1rs7T0i1jQG7S6aaqKWqkShhzfqkqkmH/70x5lMutiVEr1Wi9/+3W9wcnDE/LLJwzvHzJfz6HbM\n9sGMTz57hWbrAbNpysWLm+y0XPJmxrNXLvL6m69x+0GfMIyRZAHHmaGZBa5dnkdV4a07J2Rpyqdf\nPIsfTSlWFynlc3TGQ/qDCY/2B+RUBUnLEGKRXM7gQx9+AUmweXCwhSTLaLJPtTjPN759nfOXLxKH\nDqtrmzx77UXm5urkciXsYgPfGVEs1Hj33e/z0vOf4m/975+iN7nH6UmJkmiT5WfkpTJHpx1kS0WT\nQEstQvGYi6tP8mD7EZuXLhPtT9niPmam4ykzDDsmGhcZdATWF4p4gYRaiSmUHKJpHk22KEo17u3v\nMBmeslJZ51Q9pWBV+L2/8+6PrOB/YG5Y2E2IXYdKvYRuG4iyShKKaGaRdSXk6dIagqFg1Av8cf8m\nRlZGV03Kps5y9RKdwUMCKeVCvcJEtklCAVfIUIjwvISlfJWd/RZyZOBVDxCqJvu3TrF0nwkZ8nzM\nxAN5KmJvGqQnBgvPV0klC0OSGY0dhv2AWBOx8zajccZivUycBYydKRcunsUqFnFmA9bXzzNsHRFO\nT9jr7CNbJs3tU5bXFgn9hCCKSHwHP5WYq53n4pVzZFGMJEnkNI3scSKVmSAi+BGxVqReLHIympDK\nKa1+wPMvP8fNG++Sy7lc/84W0XjEk1c/Sjlv8Nqb71Gu2CDL7HZmXLxwFqtexHTKwJBed4RNiJQm\nnHT2aXU9pr6PrmikYoxkmNRrFpoiUVuc56+/8DIP723hOi3y9QUGk5TpbMDS2hrffu2bhH5IkBdY\nzxVRrYRISuj0h6jRESftQ5aWKpjFRZaXFpmrPqDb6WEZEaN+D3fmoyhFRDmPiMC3vvabLC6doTfz\nmAUOlaROJiSM4wjRiGl2HAIlh6WVKCwKzJfXuHPjbVKxQhKXUCSB091tfALy1jqj7kMq8+cZjA5Y\nzYckgUEwVqmtl1ksjLjr73D2bIwwzuF3ElQxQWtYRILGnFEhHL5/U8sHF7PU1xBEkSjMiKcJUTCj\nvjBP5IXoZplQlpkOZzy7eIlaYYk3bn2X7eM79E9M0t4hsRexVFtkyoh4AsN2TH7ZJAoVNs+s0+4N\nOHNmk2Gnw3gKohmzdtGi2wkQRQlRiiEK0TZtknbC6eiYBw98NjZW2OmkNOYWWN+wmcwmVKwc/U4T\nZzZCSKBYqNBu9zA1kb3jiEo+T3lumdg
LMRQJVc5TN+cIo5DtQRNVlpnL5Rjg4BHhz7r4mDTKBTrD\nEUI4Y5BqiJIMYkLRVtlrn3Du0pPcvvUeTz95gaefe4LZZJVWaxvZnjDyE+79yXf48Ksf587dA3r9\nPkuLOa6u1kjoc7zT5/jYwTJhba3IbrNLlqR0OjMcx0XXdURRxtBs4miCoYqotsWd67eIfQc5HdPx\nIt74znVeuNDgyY98mj/62h+z1igzt1JltV5j6k05PWmhywLbD+5xOHQIKzKeF1PfnzI8PSbN+XR2\nT7n29JO0Rx38wCVMImxJJXAHHB+PidNjzFKRreYWpfxVfvazf5v/7B/+IkJB5enGEuNTibHTwjks\nkzbKRJbANfscB+0tZpMUac5AUHUGzT0S28b1RywV1zgdNdFlyJkTDpttOoUxolJguwVK2kJb3EW0\nAoxApzU8QupHKOb7S058YL1hURwymzj4QUyQxhTqJdqnfYadFtFkSrm4QG11iePuPmdKC3z6yU/w\nkadeYeHiKnbFwyhKhJKOm6rk8zmWzlSJIh8zZyElOdwwoTccEAkp8SSHpIZMOhFxKBA5AWIgoedV\nkknEwJ9iWTJeLD4OflOVfF6l1zploVrBNGzOXbnIykoZzcpx99ZNsmBKvzci8nwUPY8oCliGiWZU\nmXgOuqGjqjIrCw2QJAbujCxLSLSUOJaJMoGpFxKFAoXSEkEaM5lNCRPw/RgrlyeNAs4uLyPKJsc7\nd0iCHqWcTq1ao9+Jac+63L17m0q5RBTNaLdanByPmS9Z3H7UwxdiVpcK9LpdLpw7x73tHrPAJUWi\nmDNYnjN59bkVfu4Lz/PZjz/B1751nb2uz9bWKUphjnfvHFAr5Fk9c5av/eHX8UKRk14PNZoRukN0\nJeF04nFr64QbOy1awwlGGpCIIj0l4L5/wKEQcPXyBpatEqcptlUijlKi2GV/b4+Pferz1BsrVMo1\nyoUyn3/mxxAS+Cd/93Vm3YheJyJyhkiWzcJmlVXjLP6pwI392+CDbC5RV+o8MfcZWt2YYBAgRiqt\nWYeakcPLZgSWQioKBEKGoiWEZCj5IZ6/iV1aoaCUKXk2YmgRTY33tdkPbot+HGPaBnEiIlkWQZBh\nL5QZND26/QGu77N59Sxx7QLH/SMspcjTjXU0U+R05DM2Io7723hhjCSqCKFLzjQ4HPSJ3SMif4BE\nibAbUljQCT0PL4mwJA0/TiETCKUYeZSiKRqRCx/7+ItMuj5rywnLy/M0KhVCZ0aWeciGRqW2QOjc\n4sqT54mlmDBJOH/xHKZoMRj3H9d8sphioYzvzNCNApmoUK+mKKbGwJnROzxgZ7JPo15HkSVERSd0\np2Sqgq4JyBigQc4qMw0DTCFlEkzZeXRMvdEgb5gIqkl/PODcxhmuv3MfQcqoVKuPkwhTlxu3Bqwv\nzTGZTnn3boe1jTJS5vLSs+cp6ialYoV3bj5i5qW8/XDE2nyMLLSoVAsUcwXWNpdYXTuLXXhAuWIi\nKQZ2scCVpTlu3X7E8upZer02128f0B/1mTgxiZhg6hqyVibsDggzi0TMMGozetMZu80WG2vnyVIf\nTc+hKSLnL71Au71HpbyIblpIYoxSfKzpaCkiH73449zv/YBYz5CEkEnYZDZ+yKViBWnBolS6yNA5\nQRCKvLH7h/zEZ79I2OtzNGrhJR4zO6BSzdFQK8SKSUVZYRjfI0192o7HGcuk0+oR6xLdcMyZ8zad\npPe+NvuB/VkKpRqGrGNZOrqiomoKhmwyf+4sQZIgySInD3ZYqBZQtcfbToIs5Xz+LHJSIp8ILMiX\nKYl1htMJnq/hxQFFsYLvDDD8AoIRoi3qBMGM8NRCcAR6nQCjoBO6GYQxmS2TZjFaOaCiwGzYYnl9\nhdPDU46Pjtk/PmXsOMiqzJ3vfh87t8RRs0eW5tjfaXO0s8eXv/w7tI/7yKZKkiSkmYht2oi6QppB\npVimnLNZKVdYrNZIk8fTmU4YIisa/cgndHySOCMWRAYTh8ls9KcZvZhzFzbw4wwQkTWdh+99nyAO\nOGx1WVxusLmxiGXYuK5Lfa6InyjsNgfMvBTElNbxAM/rs3+4z25zyO2bjxgOR/hjl435iIsXl4js\nOgvVHC9dqqPIIU5nRFGJeersGvV8HtvQKOkShmFw0mwhyglzlRKeLyBKGcvL85QWdJgmtE89TE3H\nKKlERzKIIstLZ4njmHa3Tbu1w3HzPfqdfQ527uMHXcbDY+LIZzIZMZn2GY0H/MJnfpW/9vEv8NGL\nv4xuyuDrvPXuayy9+DTnG2fpDPdQJAPV1vnch1/lu29/jQftO2i5FVZXFdbXbMphA9GyyBVEJpMu\nFWMNUxeIXY07k9dRU4iHE559pcxBe0Tv4fsnvD4wsuRyBaq1eaQUippJvVjEMm00BDafuMrYj0l0\nk9HRY8npXK6IblTxpwGvPvlZzGyF2HepKiWeOfsUsjJFcm0iccKglWJbeUzXZhbOsAsZuh6RCiCZ\nKXECYpZRz9VwOgmFlSpqkCMVE1762FM8ur9Lq7VNlgjMN3KguYyGDmm5RPe0yZPXnmZpc51MlREl\ngY0LBSTbRpRFbFUnI2Q0G6HLCqVqA8+L0BWL2XhKGCcYskz7aBfikESUCdyInigwmkWEaUK90cCZ\nzVBkid5kxO72Fh966RqmnnDjrW8TxgavvPJReicDBv0BWw8O6A8GmKZO67DLYBwx8z3GIwddsyga\nBcb9mGFHZP/gmDAR2Fhf5xd+5tNElHjj7dvIoy5xEPLm/T1ev7FLOa9SX8hx1B3z7t0HKFqe3NwS\nVUPGKMQcNkcomo3nu2TA8UkLJVM47R5jqhYTPyEeZsSewOnhgF5nwJnzGzz7xCc4fnSHu7eOSCWZ\nUm0BQzOwDJ04CJh5LqqSQxJENCPjf/kHf5+bW7/BzzzxCi9e+UXickbzweu80f4Kx/4bnAQ3wTvk\n1vXv8eFrlymvSUTqPdr7Kc2DIaLlEngjHG9MgIDrS/hiiDmr8eql59kdHyFKIt/5ShPbk1hde38x\now+MLO3jQwLfwbTzuG5AFicUFJmcZJFFKaVqAWcwYjSdEU4GCIJI4nmY+RI793a4euECS41LWGqZ\ndv8BhlohIyVLc2i5kJPeIY/u7xMdJESZxdL6BZJYRJJM4iRATDOmsxFpEiIlGXPzFZRkxI039mh3\nB9QaNUp1i2JBIa9YZFmElBRYPnMNz5nyxvdu8uwzFxl7Htu7GQePDun/qdjrbDpDiFNmkc9o1EWR\nVaIkplSf48KZ86wurRM4Izxnhh94iPki+ZyKICVEUcBsMiYnK3RP2zjTMbol0D5+QDCd0jpu0e6N\nuPG9uzzxzFVUzcDxXAI/RNcFKnbCFz/1ImeWlpibzz1eTihE6PUy65fmeeraeT79yU2evlLhD77z\nLqsXNsgkEamSZ+oFTMdQ0HNs7x/w8Vc+SRym9HtTrp1d59H2Dk+++BRBWqBeX+UH128+FnZNMzYu
\nztNuDUEQ8Gce/aMWB9tdhmMHRTbw4wH3T9/iq9/8CueeeZHnX3wOSzdZWFxGFDX6nSbbD18np2oc\n7N/Ddz28wQBtOc+D8Yj/+su/S/vha/wXP/n3GWYDPvvcL1CILnJp+UMMprsEdkZvNGDkuGhqQn0x\nh53puJLLbnOHLDvBMDzGkx0urz0FjQP23vOolyXEVGX5koUrmTiD6fva7AdWlPypL/4Ynj8jSVNk\nWSTwPRRZR8oEhDjFsCu4YQCyj+86VPMyPjKN2hxBNERXdDItYmf3PlEGk7GPnChIYUaMj1kykbMM\nyzaxcyLTwZjGgornZY+3pqcJYt1GS3M43REfeeoi455LccGm34t47ulrDPpD3rl9G12R0RWRB1tH\n1JfLhJHIfLXE9sEBVy4+RxCOOLdR52jvgFKhxv7olOX6As5wRCJq6AUbU7OoW3mUfJ7Npy/y/Mde\n4fzVp2ifHBF5M4LEJYsUssTHUCxSUqa+S3vYYdYbcPnqZbxZwHGvwzNPP0M/mXL10hle++4bKOrj\nhESnM0BRJW4/OGI88QnjmGbbwTRUIldktWHhZik3buzjewEFS6CkZ5TKBt1TH1O3KZQNJEHC1n2G\nwympLzK3Ok+5Uefk0TZBBnP1BnN1m0KjytbuIXoOnEFI7CdYJZPZ1EdQMuY2ighiRqgk5BSLcTvi\n53/25xiftFhYWUZOPLLIwwtjEmRyxSKD8YTG3Dpx4rC0fJlKcY7bW4/oHDc5PBkxHj/ir736n/M/\n/sb/StEe41WbXLCf5Z2b1zmYdlGdjDgpMQl7FJV10jBkOO1Rm2vQO9VJQ4lJMiFKVExZoZzPkyk5\nqJURlYwkTdh/o/8Xa2HFbDJDEUKUvIgfxeiKiB+6IGjERKRuSqVYebzMWnLpncxY3Gywf3xAKT/P\neDxGSS3MqgZTGAx62HkXISxREKvkizLtrogve4xGEcXCAq2TAbUFA0nJ6LZiDEMlEGLkvEAYegxc\niQtxQrd3wne+9w62LVJWFijVllBUiedfqNLrdCjZeSa+S96yefud13j2iau8d+8ms6nORuSiajqS\nLGIUy8RhStEsIkkp29vfp7K0QKk4T6t7gq5Z5PI624cHaEbGaOCiW0XEsIMbp3TGPe69e5NXv/CT\neLOAQmOBn/rC57jx/Rs8c/Eiv/PbX6U7GCNKEsMuZEJCXtFJsoiZ5xOMHqsql02Ve/e3mI1z+Eio\nekbJiOmHOt96/Q6Vcp6SbbN93GV5scBC0eDoxEeSDlncPMdo1CH06txtD3g2b/H6dpPj7v/J3HvG\nbJbe532/0895ztP728v02ZmdmS3kcpe7yyaKNElZVERFsSUIlmVEsWEpsYPEDhwjkQPbcRocJHKg\nAsG2nEgWJaqQFHvZJXeXM1tmp73T3/4+vZ7e82FXRsJoERr8sLq/nIIb59N14b5x/vf/d3XpTjzy\nZQnPEYgjl9TQmc1cCmUFTS7iRT6KrlGsb/BTP/lzLOsqBwc7mMTsPbiCIqgMj25w+tKPoxoyarHJ\nymqR0HdJAp8/+Lf/O5sna3zmI7/InY17PHPxORJRpN6fjwAAIABJREFU4F++8Kv8V7/831GQa3zx\n2m9x3f4GKxsSx8/8DNe+9XVWF1fodgT6+j2K2RkuHlvilnuZar2MWltEEt6GkPs+k6SEoStIfZtM\nsHBD5x01+wNV8AVB2AHmvMXciLIse48gCFXgd4E13uYdZ1k2fXv+3wd+/u35v5Rl2Ve+73vZb/7z\nX0HMBDIBNEElEhNUItZPPMLcDYj9EAGQcjK+ZxHZU+qlAoJZxXEtdvfvoddLHHUOmYc+M3+CLAuM\n3UPswZheBKakYjkuWaISRR7lhTpHt7ZZWlkhGkVYko/ckFlWahTGKmvHiixvPIskZww6Y4ycTpb6\nRGFAZzikuz2i3CpQa1UoanlcZ0I9XyZfKOHGGYIUc7DXo95co1lqsFYpcTgc0Gy36fW6vHT1W6y3\nK7z3uecwGw1kDA4PdghSjXJZ5Dvfep3RUZdPfvpDCKlOpdZC0BS8+T6el3K0f0ixWkdKY373X/02\ntztj9jojiDMkUSRn6jTLGv2xi6DDs489wqvX7+I7PmGSoSNjlmIUKYeiQqVgcDiKKJsySZqQBAKK\nqb1FwY9goWXgzn1WVxrEYUZzZQEvzLi3e5ebD3voqoAoiRhFE3s+o1Vr4EYemRwhpjGRILO+cZZz\nG2d47uxZWssbDDr7zKYH5PMVssjmja/8MetPf4zN1iJme4WFxZO4tsXR/psYZp3h8JDF5WM0Wm2S\nRETIfHJqgdeuX6G9uMSd3nUu3/gqNx9eJ9fUWWyv4cc2vlclFWwsex8/FtH1kDAKUM0pilBmuR5y\n506V5YUKbmZiphGCGFBXF/knv/gHP1QFPwM+8Dbj+M/Gn0Xl/TNBEP7Lt5//3vdF5S0BXxME4WSW\nZf/PMCSEVERWZaIoJFZVMscikhX29m/SWjhOHImQJiRhiCypHA165HMGJcnGlgJEBSQn4OXb38FP\nUtrNEg/vd2m0GsgLCyi9AeOhT0JMGos0axUGewPEpszMdkkEnyySyayEycjlmR+9RL5SYvveTYxc\nkUqtjSyJjIceqgx5vYBamdLf2yavy7y5d4f1MyfIlQoMR0NUSWTc81FVjd7REa7jc+/WFk88donJ\ncMS1m1cxiyUcHyYji/5wwuaxVZzpgGI1j+6rPPfYCV5Ox/TmEboU0ntzi8D16I36bJ66QBjavPKN\n6/zx179N4KtcOLfCzIrpHXUQRIEgdCkoNQQho6yqHNx/wMVjK9x4cMgTZ5cx2mVu3djh6HDKYslg\n5gqUchKqoqJrOWIvwskiREkir0KtUmRuz/E8n+t3OvzMxYt0hkfsdGYYOYksESm2ymSBh2hKHBwO\nUXICuqGTBDI5Q2Z18QwffvISgmSQpCGikkORcgh6jslgTNx6hMff96MgyGSBj+vbZCJIegklV6NS\njYiCgPl0hGoW8K0h9w5fp1xoY5p16vl13vPYJzh97nle6v4vGNEqhjynM7+LKJqQKyDbM0Yjj2Cg\nsrS6RGbUuP5gl1Z+EcHWSQ2X4dijUs8z1Lbf0QT/Ptuw73faDxWVJ2QyuUKZ7v4+DbPAVNTRFJc0\nSJnNhqhKmSiNSKMEydDIN5YZ9w7R9E00PY+pyBx1t1ldWuFo2qUzmhCLAkdHA+p1kzgJyJkGpmJg\nBQKDzhDZCGnnz9Gd3EXWCqRuynTocPzsGqvtRbZ2HxJ5PqPhnFp7E12FTmcbIc1jRR3ycoPyZp6Z\n5/LsJz6Df3SLoqYjFassL51APC0w8QK++Me/B2qRydxh7rlkkY8T+1y7fJtWPceot82Js+e5efUa\neqHCc6tn0CpVBttdKgsnSGOLgqJhtE7R7z8k7wXoqkLPdlk6fpLu732RZk3nta1tYs8nFTNUSSGJ\nIw76XUg1pnaKtFhmcOs2QSxx82DMo0WV2WBOTpcx0Ki
UTfw0RZIUJDWBJMUbzVEQeezCCfRSnSef\nepo7N2/zeM7E86Z85TuXybIQVStQbBjMZxZCpCBFKoLkoegGYjHF1A0K5QI/8cGPk4RjNEXBzJco\nFutIGyfY2bvHY898krMX56i6zr3X3sCoSWwWL1FuL/LwwQ2qtTx3Ht5GM3Isrx5jsbGGoSlMxwc4\nkUM5TFhd2qBoNVDQePHGb3MzuIwYxpCMMMoNjqk5Jrk8KgpTx+fefZv6WkQlqxDU75CEBcKZQaja\nDJ0eJafxjgb4Qf+GZby1QrwqCMLfePvdDxWVp2gFNFWj0qzjhTK1egkvyOM7Mqnv49pTRDElCGLS\nJERIUw4HAUkE+byC7QVEsUMY2mSCTCpEICkUmxXGaYwbq0CKki8hVCTMpkKQ5ZgIE9KcR3VDQarH\nNDfKmKKAk8ywwphnn32Sc6dbnF0pc+X111nYXCOnJ0ixznTSwfZSfD/k9Ze/wngwQMrnqS+2yOd1\nJtM+K80GP/8f/TVqUkick+kcDvC8kG98+wrjkcP1O33udAPOP/okH/xLP80Hf+RjTPtHpIHL4mKL\nJx5/hmZ9mVmoMj3cpVIsU24usXVvm1evvAHJBE3SIIPZyEIxDArFIqKUoKjyW4RLWSYJRRKthl4o\n0ViqEFkemevwgfdfZGGxjqjI2FmM67msNAukjoyi65SqNRxfIIsDNEnm5o1r3H3wkI3N43z78hvM\nxw56uULzWAVZlyjUc3jZHCl0uHhxEaNgUNcXqCh5/pv/+L+nUtRYXlqhXWsiIeBaA1RNgQw0VWQ2\n6TPuDjh+8RyXLn2M1uIxyvkaM9vla1/5Gpdfu8f9mze5dvMao2GHJJMpFRsYacx4ekjk+jTyJaJ4\nzNn1HwfPwwq7fOz5X+L8yac5OhxwZM3IpBbnz5/n2FN5FlcWaJ84gWeFCLGAgEx/5FLMTuKL3vdL\n9d+NH3RleSbLso4gCA3gq4Ig3P5/OSnLMkEQ/r2i8j77xS9QzBs4gUM9V+CZ595Ho1LCdiBXrONa\nfex5gGbksIYWuVKOYqvEYLBLsX4BQ9RBVJg5Nq6fkKYx5XqV8aSDFMqIkYAb24R+yMwNyOsC1aaB\noukIWZPugzFCLBGWp9zzQyqvi0yEHv/Hr7+Gnitz7JGP8qEP/CiXr76AF2XkSiVypoyoKzy4Nef8\n44+xuFohDV3K5VWCMOTsmfNkgoynaDz25PMcfutL5PMKnj/HrNRQpICT56ucW38S1x5jFotcf+kK\nJ594BFlqkAged29tsVivYrbajDOP3bv3aK0s8cjZ81QrNaaTKX/zr/0V7NGEoTNCNUNu3zji2pZH\nEHsoko5uKGi6hlEucHSvi2rJZFnIi9+x8L2HNFZKPPrIMkKc0NxYoL20zK2dK0iew+Zamx959lH2\nH3Z4sH2H/sRjbW2Vl25d5T3nz2LrPqoeM+gdQc2gZixx7rl1JCdi696bmNoSdtrjb3/mH5BPE0q1\nBrE3RlVzbyUVTy0C+4jNlVWi0GZh8xzu8CYyZURJRZIzsgw+9em/Su/okCj+GL/xa/+c0/UaSexi\nzTWUYhPBCxGThO0HdznzyBM87OyzdeNz6JoFus8rO7/LqcUPkRgOp0vv4c7e17g9qlHQarhuxsj6\nDvOOidjcZ3dLZ9IN6GQv0Vx8Z8jeD2SWLMs6b18HgiB8jre2VT9UVN5/+OOfpFbO43kxu8Mx/syi\nsFREE1WyOKLePEF/sst4f4/C0iqhZ6PLeUaTESvemDgNCGIfOYJ8IqHUN7n/4DZhJCEFFqQVRFMk\nwWOhUMCOXVKgs9NF1mNaZ9rM9kfIsoRYTTmIHTY2z/H4+eOkCVy9+gp5Q8ceBvQGXabWjIIos3bi\nNGvrTXrdHXJKxvkPXcRUy2SZh2vPUEoNothHIMaslZBzBfY7ByxUK9gEqPoaZ8+dwEHl1je/TH1h\nmWtXrpCrH1FRSxQrYJZLTPsdark2wWqGnqui5QsstmFp8RTj/k2OogmNjZPsDx9w+tF12s0ir291\n8e05XgQ5HRQ5xndD1KKMbtZw0zErj20y2r/J9y7b6KbOyTUPJZfjk88/ym/9zhcZzqbc3OmxuFwn\nDAwqRQk/8uiNxnwtCzEbebzRHMnI0V5s07vzEM8xOffkGaqugZD6/Bef+cfIYkaczrBHPmkWk6Uu\ncyug2lhH12VEQSCIVObDHZY2nyCTVOLMw7Nd0pyALIm0W8uoqsLf+/v/iMlkTre3Rd5cJksm7O3c\nRTfq1BZX8a0Ow+2rrOee5mHcJ/bfxOm57Ha/RBQ1uT3aQsrVKOt5JHmEGGxSrh4jVjQUucqzP3qc\nyAnZHd3BUE5w9Y8Pvl+uwA+wDRMEIScIQuHtexP4KHCdtyLxfu7tad8flffTgiCogiBs8A5ReUkQ\nkkkSGREKMhM7QdZkMjnB80FSI4RYprZ2nEl/h9BN0HMysgp7u0eARBQE6MUCuXqOKPXQSjq1cglB\nK2HqKUooIgt5KqGMLmpYgxChKJI6EVZniqQJeFFI756LpAjsbO/wf335d3n59lXevPMKS2tlNF3C\ntRySOCbXrHOw/4DtvTGnz1zi5Omz3H3zCm7ooogSiq6haQb55gKjucVKoUmShex0DuhPxmwuLvD8\nxU1Gu29SVUSyLCGe7VISYxZKBt3uHYazEW9+77vYUxetWuLg8Ag/CiiVFzDyZQxDRpFKVNptFAkW\n6suUGg02Nts8/76zPHbpNMfO1mhuFpHyGraXIsgiYTpn4ewpBgeHRGGZue/TH9p4acC93R3+53/x\nh2SKhlkss9Iqs7dvUa8WKVdavNHbBVUhh487mVNrVVhZruMfDWmt1jj3+KP0DjvUa6scr5+j191D\njS2i0EdURYqFEknicNS/TxCMsa2ALFOJkpBiocxs3qF/sM3R/n1s16N78ADbHgMxoiAiihKyqKAq\nBWQlI/YDFpY2mDtzbm19j6vXrnBl+kU+/uxfwfNu4w9SvB7MnIj9gz0yb0S1tMHGwgVq+kVEs4Bh\n5DmxUUM3BR6OrjNNtjhzJsff+dFfeEcv/CArSwv4nCAIfzb/32RZ9hVBEF7lh4jKC32HKKqQhDGF\nvM5wGDA87FFq1AnmPuOhRb5UxfcsSvUWzmiEHCuQSIx7A/R2iak/ZawI1ESJ/cEYTTZJdQEtLxPZ\nIaGiE7kZnXKE4EM7l2OScxCp4LtzivISWTZnbk3ZG3TJejquGbK1d5swjfjNf/vrfPiJ9+Onx3j9\nyi18a8Z4ZNNaaXL39i2kzGM2PKR97FFyRg01ZyLJOkngo5s6+aSMZR/RsUY896Gnyew5t+9cxpFM\novg1ps6ENIbtmztMv/gin/jUj7Cztcf5i09hDQbs7u/TWllFVSVm3VtIeo1edxetpLNUf4z7e9uc\nO3saNReSkw2+9Cd/RCFfJLk/QKrUCMWEk0+sMT+wqTYXkWsxagcsP0KRdFIh4taWDbGLrEhomkIc\nSRyOQ546f4
x9r4eja6w063T9OUEaUK2WGfkD3H7E+y4dpzNL2O3fwnJE8mlG3cjTLhS4u3WdxaVF\nysUKqZBh23PqpTJxomGWSljWEG8yQM/nUXMN8nqKUSrTaLa5d+tVYn9Cub0Ob5+zGw8nvPrK16jX\niiSxzWAwRFFN7h7cwZJm6MIlvvDNf80sOyQIiuRzb50nbLTz1GsC+fwRR8MjlnJnGFj3ycnHif0x\nJW2Bil5FlQ+4/71d+mvvHDnx/2uWLMu2gYt/zvsfKiovSRPiKMV2bFwBmssrjLsDcqUM5JQwFslr\nKZOBhyBpJEKOmT1FkiS8yGGlcYZMyki9GTuZSF40sDKP+U7A8UvrTCZD8FKsXIIgSQQ4iKJEQaoS\nSj5SwyBnyMwfeNRONAiiGWIpZrHVJEx97r8+5lAP+cKrL/Dkxik+/PGn2X64zXKpBamPM5vy0vde\nYbndxAtCBFEhiSLspIui5hBVBUV6C5r99MWnQDhgEIV8+NSHuLf/GuPxlEwBxSihHq/ysZUTxJ5F\nSdLRoj7XH9zm/R/8GDdvXaXvzilVS9SX8kiiTBwJZJrD0mKVF1/4PE+95yPcG90gTsbUqlWspTXc\neUZaddhsLXHXvYdUHpLXSsRqAb0h4rkOSmLQXjKxrYxCtYAUa6hljcj1ePXhfTw1YdzdppSW0Rd0\noknEzB2jxwpyyeDGvS6ylJHKEYov8aFLP8mjCxvkSjlWtUtE/pj7918nFXRUrUxOE3F9C/telzCI\nmPZ3UQt1rOmrbF54kgsb5xAFBVU1SDKbwPeAMbpWQMDixz71s/zqb/wDmgunOBgfIGg5EjVhmAyp\nlh2caZFc7jEeeeQJjNUDiPbZ35lSWr5PXtVoNmoYyYjR7RFzzcEXZ6y12vRHCc9c+jGq1ibOg9vv\nqNl3j0gZRPiuQ5JlkCgYukSYFwl8myAKKIki83FG6EfkS3kEOUHwElJVJs4kHGuIFbkImkgiCjhu\nhG5oFB6pctTtQeqhZSqCoiDGPpmlIDd1uv0ux9dPMpHvMJ4MkEMTYTokkOsE0zn9u9sc/+BJimfH\nlNQ8g52Qr3zjNp/88RbPPPUMN27eIEolDgZDaqUie3uHtFstNNMk8UJ0VSbKJGoLDVzHQRFUYISb\nxpxceYx9q8/64hJz28fa3Wd0FLD2WJsXL3+X5fYiadbm8IVXWd08Rb+3j+/MmXfv4oTHqa7IKGqe\ne93r9LZ2aGmbPOzepH3Q5uuvfg5TN4ms+6xoTeKcQqKFxKnN8hMtRDvg6JUOrfUaF44f5/aDOwgO\nrJzfoORLPLT3KOgKW7fuc5BFFEsmWgmUqIKWJdSqKq5apDPsIRkGWjKjvFBEzGQGAxdBkHnhxW9j\nvl9Bvz+ludxE0vLs7Yw5deo4h4NdTNmgodaIQ5tBf8Ty4gpLm08iiQ63rvwpw8VjFIs1ZE1HlZu4\n04c4ahlVnSLLMtuH93n2gz/LC1/7HQ7SMUUx49rWmxQ3ZTLJZlqSKXkKRSXP3hsy4/KcwdYBK5PT\nPDS3Ob7WZSzvU9VXGYVz3J6JubDJpy9+gqVGg6wsM+j331Gz75pZoiQgDCziMMYNA0QpQRANrKmD\nLEuQT5mPbSTdRFBTTNWgP3cwNAndrOAFEZpSplhaYGZtoQtglMpMp12URhEe+FRLRaZ7A9bPrhMt\nuUz6PkWjyiQYkYY1vGBAYbWF3S2gFiWKzSK1RxYIlYwTC2e4fe0mxZNFRs6cz3/2TY6f7HHu8Uu8\n8p3XSMQCK2ef5omLj9JcWiH1XCRZQ9ZNpEzFmri4dhcvDAiVPsFIZCDdxp+H5CrHkBhR09aQlw9o\nOybaxY/gpxbrBZk4XiRfLuBYh5RqZXzpNEkQsXPnCp39A8xNmV3V5s3BV9DKItZ8zJE146nWadLY\n50C1yIZTgnxKuSgSizl2r+xhxRlrawq/9+Vv84lPPsdk54i9oz3iLCVWZIZZzEHi0a4tsXx+md79\nQ0RlxLyvoJR8wjRjoV5CiGMG84SipTGcjUjVDK2kEscyH//wR/n93/9tKlKdrdde5N7uPked+5w6\n8zheHKJpCfNhiOt5CGqN/sFdXn7xy4SRD8bnWWivUq0uMxzvEsQJ9UaJXK6ILIucPn2RyXzKpV/6\nx+wN55hqyG+9/L9y4/CzeJFM013BClO8+TZ5VaF3bURBamHZfZ6qvY/Ld15nc13j5NlNvvD5WxTa\nBlceHHHBnHDj7pAfee5ZyvW/gM1fUeyRJTVSISEIXFIhJZz7NJp1nNhCHAYUCxUEEtx5SGiFmMUC\njjPAsVxqK2XIyYiORVXLISznScK3WMKCauM0CszSCYWnQkQtxup62M4MqSghhinzeUJ7pU7qadTz\nBjfv3CP/+GmixCTRbPphn9FOyL/4b38NfsZC8kOCOEWSND763scpFMvEgUsqSkSTGYIqQSIQhz6q\noZFmPm/ub6ErMqPZjHJtiUm0TzMQUZaOYfUgUI/ISRqYb/XI73UGnF14H2KuQKtYI2w1SSKf88UG\nWSoxcWw6mcW3Xv0KJDmSkkx2FPLl3a+CJjEbh4yqA9bma7xeehkzzGPlm2jRlPrzx1B7PboTkfOP\nraCbUyIzh58ITPwJWhTQ308onVwknkNn2COpzqgVF6kVxqi5Bv60S8cN+OijH+XW/p+S+D7H10xe\nfWOOUjyipit89Uv/J6GUoMoRlUYbreuCkafbmXDsZJN71y4zsWIarRYQUTvxGM4Xfp+5M+CbL4Vc\nOOYiqdusHztBqVwmi0Mce0g+XyFJE0pmkSSNaVUk0rjEpy/8NPd7r2G5IyaxDW7ANekFFJYIZBOp\n5uLPROx5TDYxGRY75FZE6ssKUtjmyZMf5D1PfoDBnVt0D/Z5uH3tHTX77pklTHBCiwiRxI8RsgjJ\nVBEMifDIZ0qKmjPR8xKzwyn5YpU4mmEoJaSGxqBzROzDMB/S3e+T1zOEioQQSkSOzNywiQKPcmCS\nFQUiTWDz0iaT7pggCxCmLq6lUat7zLKQymoR1w8w6yrTgxElbYGXP/857NkeTqaCoZBHIQojMiHB\nGo2IySiZNQRVZa97wNrycRzHIcHHjSALE2QjYZaFDPZvk9ZzrNaWef3aNyim6xyKPfpjid3eFNUw\nOX/mJO58RCr0ILJJ4hAhsJjPuuhmBdPQeP+5J4mlMd+6+l3ycZHtvQExIgubJncnd9BnCd8TDxAM\nBV0pMux3aOo55oN9ZFUmk8eo1SW+/dp9agtV5LyC6Rjk0yqiOufYiWPcP7qOJEg4vkWtfJ6BlxL6\nfSIDjlfavNr5GitqhevWgHrhSZbO3iUvNRgOXV4dfo/pgcaxWotX3niNYqVBfzplHPfpTYZkcUC7\nVuVbX3uZk2cfhRf/CDeFQFfJJxmvXvkup46f5p4/IW+U0HImm2fOQZIiSKDrBUgcAi9BzRUoFFf5\n68/9T/ydf/Rpjl3wsCcVNDNH0RT4yx/+Bcx8C2e+w7du/wHjESi5R7h
<remainder of base64-encoded PNG output omitted>",
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# draw box of the ref using 'green'\n",
"plt.figure()\n",
"refer.showRef(ref, seg_box='box')\n",
"# draw box of the ann using 'red'\n",
"ax = plt.gca()\n",
"bbox = ann['bbox']\n",
"box_plot = Rectangle((bbox[0], bbox[1]), bbox[2], bbox[3], fill=False, edgecolor='red', linewidth=2)\n",
"ax.add_patch(box_plot)\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 51,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"IoU=[0.09], wrong comprehension!\n"
]
}
],
"source": [
"# Is the ann actually our ref?\n",
"# i.e., IoU >= 0.5?\n",
"ref_box = refer.refToAnn[ref_id]['bbox']\n",
"ann_box = ann['bbox']\n",
"IoU = computeIoU(ref_box, ann_box)\n",
"if IoU >= 0.5:\n",
" print 'IoU=[%.2f], correct comprehension!' % IoU\n",
"else:\n",
" print 'IoU=[%.2f], wrong comprehension!' % IoU"
]
},
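{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Editor's note (not part of the original notebook): a minimal sketch of what the\n",
"# computeIoU helper used above is assumed to do for COCO-style [x, y, w, h] boxes.\n",
"# The actual helper may differ in edge-case handling; this is only illustrative.\n",
"def compute_iou_sketch(box1, box2):\n",
"    # overlap region from [x, y, w, h] boxes\n",
"    x1, y1 = max(box1[0], box2[0]), max(box1[1], box2[1])\n",
"    x2 = min(box1[0] + box1[2], box2[0] + box2[2])\n",
"    y2 = min(box1[1] + box1[3], box2[1] + box2[3])\n",
"    inter = max(0, x2 - x1) * max(0, y2 - y1)\n",
"    union = box1[2] * box1[3] + box2[2] * box2[3] - inter\n",
"    return float(inter) / union if union > 0 else 0.0\n",
"\n",
"print compute_iou_sketch([0, 0, 10, 10], [5, 5, 10, 10])  # expect ~0.14, i.e. wrong comprehension"
]
},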
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 2",
"language": "python",
"name": "python2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
================================================
FILE: refer/pyReferDemo.ipynb
================================================
{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%matplotlib inline\n",
"from refer import REFER\n",
"import numpy as np\n",
"import skimage.io as io\n",
"import matplotlib.pyplot as plt"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Load Refer Dataset"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {
"collapsed": false,
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"loading dataset refcoco into memory...\n",
"creating index...\n",
"index created.\n",
"DONE (t=9.88s)\n"
]
}
],
"source": [
"data_root = './data' # contains refclef, refcoco, refcoco+, refcocog and images\n",
"dataset = 'refcoco'\n",
"splitBy = 'unc'\n",
"refer = REFER(data_root, dataset, splitBy)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Stats about the Dataset"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"dataset [refcoco_unc] contains: \n",
"142210 expressions for 50000 refs in 19994 images.\n",
"\n",
"Among them:\n",
"42404 refs are in split [train].\n",
"3811 refs are in split [val].\n",
"3785 refs are in split [test].\n"
]
}
],
"source": [
"# print stats about the given dataset\n",
"print 'dataset [%s_%s] contains: ' % (dataset, splitBy)\n",
"ref_ids = refer.getRefIds()\n",
"image_ids = refer.getImgIds()\n",
"print '%s expressions for %s refs in %s images.' % (len(refer.Sents), len(ref_ids), len(image_ids))\n",
"\n",
"print '\\nAmong them:'\n",
"if dataset == 'refclef':\n",
" if splitBy == 'unc':\n",
" splits = ['train', 'val', 'testA', 'testB', 'testC']\n",
" else:\n",
" splits = ['train', 'val', 'test']\n",
"elif dataset == 'refcoco':\n",
" splits = ['train', 'val', 'test']\n",
"elif dataset == 'refcoco+':\n",
" splits = ['train', 'val', 'test']\n",
"elif dataset == 'refcocog':\n",
" splits = ['train', 'val'] # we don't have test split for refcocog right now.\n",
" \n",
"for split in splits:\n",
" ref_ids = refer.getRefIds(split=split)\n",
" print '%s refs are in split [%s].' % (len(ref_ids), split)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Show Refered Object and its Expressions"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"ref_id [22758] (ann_id [540661])\n",
"1. woman in front\n",
"2. lady smiling\n",
"3. woman\n"
]
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAMsAAAEACAYAAAAdo4LwAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAIABJREFUeJzsvXewbXlW3/f5hZ1OPje/3K/7dZgOk7oHhhkYZhAiFMmW\nymCEURmDbRkJlWWkYowBSwjJhUpg2QiXXVgYHDRgE0QQaQYY0gwTu6fz6/D65XffDSfv/Av+Y597\n3309TTNVZuiR662qW/ecvff57X3O+X5/a63vWr99hPeeO3bH7tifb/KNvoA7dsf+XbE7ZLljd+xz\ntDtkuWN37HO0O2S5Y3fsc7Q7ZLljd+xztDtkuWN37HO0zwtZhBBfI4R4XgjxohDi+z4f57hjd+wv\n28RfdJ1FCKGA88BXAteATwDf6r1/7i/0RHfsjv0l2+fDs3wR8JL3/qL3vgZ+Dvimz8N57tgd+0u1\nzwdZTgBXjjy/utx2x+7Yv9P2+SDLnf6ZO/b/S9OfhzGvAaeOPD9F410OTQhxh1B37AvWvPfitbZ/\nPsjySeBeIcRdwHXgW4BvffVBv/g//jCDXo/Qe0QyZKMbsUgzRmHAo+fO4seXGO3dQBrBfu5Z2TjO\neLSHxrN14jh7o33OPPAwj5+/yANvugfdP45KAmJCimJMnc1wPqDV7+K9x3vFb/3aL/Ged72LOIxp\ndTrkVck//ef/Ez/49/4zClMhg4B0ssDh2buxTRI4vHHs7Mzo9QbIIKC3MiDqdPmRf/GT/MHv/T41\nEiE1CLU8TzMPCCEOHx/dBuCluG37wf/x/j6DlRWEEIfHSicAh3QGV1bcnYT0Qo9RAZfzmqkOcbpF\naCwIC1LgBTgBUt4KHJzzSCHxGITweC9QqMPz2yU8pJTgPXvbN1nd2myu+8i1N+Y5GkA0ms5rv+db\nx4jD/wfHHf1/1A73IQDJ7vVrrB07DniCQIF1KATC+ds+K6OXn5nS4GqUCpHO4xdT3tXv41otPjmb\n4eMOOghwR84tpENK+KPf/OCroXpof+Fk8d4bIcTfAX4bUMC/ei0l7Bv+o2/j4rPPs7Hex6kIl+Zs\ndtfRezs898IlutIynbWp5zdJnWDX7+EXOYm2pJeus7axQWk8UkI76ZHohLzKcKok9IJ2lFA7jy9K\nrDXEYYtu1EF4Sb7I8M5jnQRjMWmJy3NqW5MkmjDuEa0OyIo566c3WV+3uEChpCAQEq3g/d/5Lfz9\nv/WdfO8//Cc8/9wF0BLvPM4D3jVfmmj+jGu+UOcdHg6/JCEF3roGjIgjoGyA4gUYKQg96Lpio5Ww\nJqEUgkuZJVUhwmsEBqscCvB4GnjfAl1d1yipl5tvAdQdOZ8SAnzzao4AeznQ0e8XKcXhOHAL9EeJ\ncpQ4QhyQtjkHt3FDgBfL6wbvPELe+kTcqyYa5xxKCrxf5hDeg5QIQBuPlwLtHUYI8A7rJe0Qhlrz\ndJpidIsQifEWLQOEWH5mUmKdfV1sfz48C9773wR+8/WOya5fZefCS/TMKUztsd4xlIY1ak4+dA/F\nNKM/zFGDh1ntrCK2Nth/4nmuXnyCsrZMFyn5znWMKfnox/+UaVriixQZSrwI6ScBVniiIGTYHtIf\nrjJcW6O3uoE0hhoBdQF4itKRTVK6LUe5v2DwyN3M9i8RK0c2muGERiGYz+YUacrKYEg5mpDbm7zv\nix/kJ/7JD/BffO/38/LlbZyKCYXDGIOztgGelngvwMsGG3gQAg8oKZBuCTA4MqMCXiCNwVQLTrcj\njivPXEquFI6ZVvhAIvBI4RFSYo6McYBH7z1KNTRCeAQSvEYIjxDu8BiBOqTZwesPMf0qb3Cw59Ue\n4dbh4rZ9AnXL0wiB9+7WjL70brfO5Ztp45Dz/vYxvcUJgRAQSIlzDuMsXoJEIGk+B4tDAh7Behgi\nlGCUGQg7WOlwQmBxy/MIvFDLSeDPts8LWT4Xq2rB6tZpom4HXRYkcYtiPGdtY0g2H9OSGqfB147R\n1WusDLuYOqMlDYoKM1lAteDc5km2d3c4ttKn31un9JKtBx/jyaee5C1vfQBZ1hTWUEznyCrjY3/4\nQWb7Y4TWBKHGlClXr18iRFBawXg0pveAZTyfc+7cfaRlias9Ck1noEmSNnVZEnfa4D0rG+skpuBn\n/8UPc3VvwXd9zz9gUUq0DjE6xDoBvrwFNkAtweC9v/VleU+StBpoeINazraBsZzoJgyFofYFl0tJ\nHrRxQqAEODxyCSyx9AyHIdwyDHOuAefB81sh0u2gllIeeoik0/6s8PFoCHXw/LXqdEePP7pfSYX1\ntiHwcqzm2uRtr3XOHV6LkoKk3UW4xiPJI2Ma65BK4YU/DNnwEo9HyJBCe9qznBODLnOhSfHoQCKk\nxOGa0FQ1lPIC3J+D2TeMLM+98jI3L99g5Z2PYZSmrmp6nR7ZvKIqK1qxZjyf01UKX6S89MTHiUxA\nHMYMekN8XaOVIp1MCL0nEBZhPbGQmMWUYnKdIL8LmSSUuyN6seSeE+tgauzpLUprQAS87xv/A6r5\nDuOXzmNNzSMPPgw+5ezpexAqoKgWTIuCci/lpVdeZKszZGc25h1ve4hLL1/k3KlThEmb0XjCPadO\n8OF/8wE+9CeP830/8IOouEsQxdRGYYy5NTtahxMgPIgjs3SctHHCNrN+XUNtOZN0GIoaK2NeSg2z\nQOJkjBa2IYtoyID3SKU+C8wHRDkK9FsAPpp33A7wdrd72/YD+7PysINwT0r5WZ7lwCNZtwxPj+Rs\nQsjbrtm5W5AVDjCOYbePd03cZXEImrGEkFhvEWqZL1qP8QYhFMIaQNCqavpBzPOLlFIHhMIg0Utv\n23wHXggcHvH6juWNI8ub3/Iw0VvfQrS5DjrEC4mfLbj04iu8/OwFNldXybMFwd4IqUP8oMf2jT1O\nbLRRMiSOu7SSFnWZ0RUh0zylrBVrp09jhCIJIVvMaKEJdAuvDFEc4J2nXGTE3lFKj8nGTC9fhWqK\nUCFX9y7SKXeoM02oNIGGgXXUvYREt7jvoQd4QBZoHfKudz6G0RKpE7YGAyajfRbZFO3m/LN/+kN8\n+I8/zq/91oeI2j1UoDB1hVLNl6OcP8xhEB4hVTPTuWbm9abiRCeirWq8D7m4KNhNYqRwaGloAjDH\nAbZePesDGFOhVHBLWPAe72/N2kfBcRT8Rz1MM9/KQ8Afgvog5Dv4W77u4P8BSZejL8dmmSMc8W7+\n1v6jyf6t3MjjcHg8UumGQMtJxh6+52UOJQXOS4R3TUjmYC3QyCjh0niCSGL8MvwVzjcJJRIpPBZP\nIF6/kvKGkUWJmrrKSV/cI253KGuHjlucvucugjBgEAdI5WiZmu3phJPn7iO7N0dp28yiUYypKz7z\nB3/I6eN3cWmyy2J/j+KJz3Dq3ruZvXKRD/36HxImK3z9t3wzk/3r1HVFkWX0ojb9VszaWoIPFM+8\neIljLcvqIKZMc9phF5MY0nKOqQTWOjY
2TyI6MaNsypo2uNBh7YJ0URC1++zOZrTCCOENe6Pr/NWv\n+Qbe/o638t/80Pv58If/mH/0wz8CSiMQGOFxgcJ7iViGUN57kBaMRpUlx+OEE75C6ISXFgXTWBPa\nAsIYIdwyZJAIwW3A/GxPYDmaUR+A0FrzKg9gl2A/APJRMvnbkvCj5zkglVp6tYNtt3kWLM7d/tpX\nh4SvJsqBICLULUHAOYdHIJc5y4EXkkul1+ERshEJvJQIU3NssMKoNNRKo7xqFMCDvMsdjCtQCjhy\nja9lbxhZwiAhzeboIMQ7gzQ1l89f5oFHH2NlfQCLBVWxIPSeVihY7F9D1zXjGzdpDfvEnQ51XSKd\nZn3Y5q67v4xWu8N0tMelK1e4vLCMpobt51/gv/y+R5Dn7sfKCLcs8TzxxOP02wHbF68SJj3SyHBz\n7yZUNZfHU0xdonRCuih46KG38vLVK/gy4+yJM5TzfZwpqE1NkiQgBGu9AVY0IJtPR0QakAYWu7z3\nbffx9v/zp/k/fuFX+cAv/CJh1KbwvvEwUkJtCFQD3Loq2Ao1W0rRCjUvpQU3tULpkFYlyKVswCIb\n0cA5RxAEWGs/iyiHocbSIxwA1Fp7uO/Pkrpvz0U8DVRuB/yrX/vq0Ovo41fnPQcEPzrGZ+dAjRhw\nlIzOvb4MDY3n9tbSL2vaYcSnRjsQRMvjl/sPBYeGXNLAa1dXbtkbRpa92ZwojJiOx8SdhEFvlXtb\nm+SLku3rl9nsdGhHEaZ2qEBS1xVVVTLY2CCbL7BUFGlGr93Fe0sSaK5f2ebl808x3t0HH4MMyOsa\na2ssFhVblJbkpub+Nz+EHl+kE3iGDz0A3iB9Qa0Nw+P3ce2ZJ+nHEShPWZYcP/N2rP0Mn/jUU2RF\nRpnuUrmKSMaoIOLYxibWWNZOniAOBkS6w3w0YnV9lcoa1kP41m/8K3zDV7yTH/rvfoIb+7sUOLTw\nSG9oKxh2Y2pZs2ocSdzi+TzjaiCJfYJAUkaNVxCHYf0yF7D2SGh1RIVahhzOe+Rt4c2tcOPo46NA\n/ayZ/8gxr5XAv1YOc/DfWnv4+M9S0Lz3jSys1G3X8WoJuvGUt++77T0IELYJN7fimFBL9oRDSHV4\n3KvJLYRoJP4/p/nkDSNLHLb52Cc+xWNf/E4WRUk9rZkvZtBWlF5ycTQm0QoQ1MbgvGOxP6ETWTqt\nFfame7RXBhALuhvHePnidcb7c4o6YDSvqCvDPJ0QRposX5C0FdIEjRvXmjiJKK7NcM5Qa8uAiiBp\nMc7nmMUu7XbEYj6GekI7FPhLU051BXkoCJJNkmCDzpm7+cD/9Tt88zd8FWG3gxEBUrV5j/F86hOf\n4K67TvPyC89zbbyD8oI4CGlFCe//nu/gyRev8Bu//muc2hyy1lYErmB7NGext6CfJLxSZlzx0FYa\nLxxeeaQXKOERCliCxh+orM5SyxYej3IGQQ0+WIZS4lCFFUIiaYDhljmBByTqSGgjkFJxEKc0AoK4\nJSELj7XLZF4KvDtaI3pVkv5qYWHpVZRqpGrkEflaiqWSdeu1B+acw1IhNVi3fE+eVxFLI52lCDXt\nvOR4P2EvLzFhSMsLnATjPXiHkLd7t1eP9Vr2hpHFLHY5e3wNny84trZF4BRsaSrvyTtztARlDU5Z\nnPdkeUFW5Jx+9EHarRVEHKIFbOUFL33mGZ55/DyLNCXLM5QX3Lh+hVA3M1GWpQRhC29zgl6HRAV4\nBXOvSXSEM4JxMacXdqjrGO/bVLOniPKMQTdkMd9nUWqCuEeCpp5OyCnZ3tun3ZeY2Qi7mFGHIbNx\nTj7Z4+H7zhJoyUq0wulHHuDC08+gpULiaceav/a17+Vv/vtfzY/8w/+WOBLMpxPe9ObT5Gfu5dMf\n+zTJoM/KvCBDN3Kpamo0zotDj+FFU7ST3uOEa+oXfkkCr1gW1m/VTpZgNd4hhWyeL4uL/ghAD3zJ\nZ+cRzTHO3q6wWefQS49w1HscEOToNuccWutXCQC326u9xVEpXEow9rOr/s0VWxAK7R19PJ24wyeu\nXMWsdPBHuwxe8zz+tuLra9kb6Fk8N65c4OyZ+1AGfCypBMQyQPoOoqoIVMBssouXgrWVAaNRD8Zj\nqkyQdHqUdY7WAa2gQztJSFoBZdVmvDtiMhsThAkqKPng736Qd3/pF7O6sorSHVCSxXzBysZxTF3y\n+7/9YU70Q3Yef4bU5Az7x9i5foEzp4+x4RKU3uD0sR6lqQm0Zm9nRru7QuQ9vh5jhaSYp2T5iNVT\nJ+nEpyjSBcoHlHnOwHsiFTLs9ZmPR2ysDkmrjP39a/yn3/6tXLh+gUe+6GGe+Mzj/PbvPI4f9Bi2\nV3n7McHFGyOs9IznM6y1mFqgtAYhqY0ljiNMXRIKiZA1ILG+UbDcUq5WSiEVWG8Qy3YWACnUrTAu\nCIBbAHfOHIZ2TQuMxGNvC7sOAH8QOr06JDua/AOHxx71PAd2tCZ0oKodJduBNa89qMEc5DFNXuNF\nk6wH3rApPaktGeuYUOplO8Vn152Mad6nsxb5haqGzdI5x06dpK7AmIyoNaSVxNjaIJUliCXT/V36\nrQ42kNzY3qOoaopiiko65DbDK8Hjf/IRpE+IgwinJLWpMbak1YrJK0eW5vwvP/k/87M/87+CFjgD\n4LBVzb/+qR9nZes4SQKnzxznLe96lCzPqDLP+NiA1X6XMIqY5xmTuM3LL73Emx99K8fObVLMx6xK\nxb22y/mnLvLilcu0B2062zfxKmQlSbB1yemTx6hnKXE7wVnH5vETGFeTLeYoqXH1jK3NTZ596Qof\n+eiLvOexdyO/qI3xAiFKMJJ2t0dWl+BgkVcEYYgSEh0qZos5w+EqdVlgJWzfHLE/mbM7mTGZ7rO3\nvUuazsnLkrIuCaRAeUcmFN43RNFBQJ0XeCnodDoEYYhH4h2YugIUSvuloCCXBPO3VC3X5EWNCnwL\n3M77pRDx6vaXppXHOocS8lCGbjyHvEXg5WOW+9wRlc57bvNa3jukE3gkyjk2+1126wLf7hAYgw9u\nkVmqW9K4lBLrHFKJZZvNn21vGFm6G3fTqj1JLyZL5yxG2xhrcCheuXSV++69myqUZFXFfDInEJLN\njS1qN2U6XVDmMy6/cpXACrzPsLbEWcFsnLIy3OSyvoGtCryvWBn2kEowz0p0oOl2Orzp/vvxleWV\nl1/gnrs2WekPKCZznFBgJwy6iiQWTGe7zCZTkmzI6Po+/p4Fu1cu0+51mY6n3NXp8MylHd7z2MN0\ne11msxlisEU1m7A2PMEzT36aRx57lN5ggC8tqXEYAU5KPvX4S+SLMUEgqFXAffe9DRm1EC5grT/A\n+ZL5eEaRZoRKU+clkYJAegaDLgpHOwnZ3hnT0YJeICHxdIKQk1vrdOVJfJXSjgWj8YR3fdm7WU88\n08Lx63/0Cb7p67+avZ0dbuyMePfXfwdht9NU2GvDT/7Yj2LmKfPZLpNsRppWzLKSeV6yyHMK4zFV
\n1XgfqZY9WgEoibEOXKPGGWvQStA0mkq8qBHCol2IsgInXCOfKwHWUwvZhIhSNdKxONLC4wTeLZN0\nKVDWI6zFC4HwAiGh0DWdrEQFG7yysyCPMuIkROPxuIa8y64Hrw8SfkALCv8G9IZ9LubSMYHSlJOS\nfD6n3elQpwuS/gbZbIbyll4rJugLVloerzXeVOzvOBb5lJfOXyDLDWlR0g5jbFngvMPUlss7O/R7\nfaR26DObeA9KK3QSY+qaaZqBLHnpxfOcfuBerly7wtrmcdprK7R0m3reQxZzdm/eRMqIjeOn2R1N\nEEIwmsyJQ0UYBawP+lRFxfrqAI1hsn8DoSUXX3iWB8/dxXjnGmdObJJN91AqIE1zwiChqCtM7Thz\ncpP73vw+KgdVJWlHLZ47/yKzWcnCWGpreeHaZXrtTlP1t02l3jnHdD6n22qxN53R6a8wnu5j2zEq\nbCG8p51ESKORicC4jK/96q+gXOxjbMCnnnyO08dP8+nf/xO0yqilJJAKYy1BrBAaelQsijHrWpDE\nEWG3TVFbKusw3jW5kpR0ex0qD7N0wXiWMptnTJ0hmxekaYozNZoAhEcRUCmBQ+KkxwiBcArkQU4F\n1tQI3QgTwnNY97FL4UEgD+Xeww64pccoFcSm5njUoS5LnDfEUdiEnixDQDzOWfRym1v2qR0Q6PXs\nDSNLEiQ4b8CWBIEGY1hfX6Nwkmw2xTtLlVXUymKKCm09CsFkd8HNmzvYsqAXJ+S1JQpCAiHJq4JA\nayK1grWWXhHjNhxlXVBVNXldEEaKv/U3vpluoohUgAgkq+fuRpiK6fY1bOWoZYuLzzyHxNHqd3Fi\nxsax49zcfYK704JFWrIzzTBFztrqOr3N49iqxNaa/dGEtZP3kFlHq9fD2RqDJIwSYqepipKVlXUQ\nlnyxz3R3h87KGqEQLBYzrK9J2hHeS9K0pNXtECctAqUIdYC1jZJknQWp2Ng8xs3RlPXhClmVURcF\nzgn2dsY4C3EArdDy5DPn6SSSvHLcde5+puOcvFJ0kh6TRYZUHlRTxNRKEGlHhqW2JUHgkKLGVBla\nJ8RBAzIwUM5wVU0HiQ8km1urYARqq5HpnPeIfoeyqrHGc+36daqq5ub+PsZ5SufQKkAg0DogiDR1\nXTW1I+PwDZMaMi27HBqpV+BNI1AI2fQzBB7a1Zy11Q2m8wIXBI2gZx0yDMA39HLC4ZAYu8zLDljy\nhVpn+cwLL7PS75MVOUJo+r0eNxa7rAzXaHXaqCBoJEknEdag0Fx+6SLTxRxTFggnKPISA5SmxNUG\nddBuEYZ4CWGYUJYl3aSDB77i67+GVr2guPYc9WiC7fRJOn2yzJBlFXkx59Rdd2EShZWah978IOUi\nJQg0PpT8tb/+jcxmEyLVI1zbQCvNtcvXWN88jlYO6wXHJOjOBtP9PbZvXEVjqbKM9Poe3d4qUjj2\nL7zSxNtKsbdznVYUs3riDONpShKGlJnBmJpeK0EjaCUJVVk2nkU1/WC1qVGhoKoqjHP4IARTE7dC\nQh0QxzGidpiioNcJ0brF6tYmm3efJc/nnHaeojxFXRfcO9wErRFYpNA4HLuLlMwY3EFuYgwiiSms\nIIk0ZWFwxmDzmior8QgMoLRGKIcVgvl8Tq/Xo1UvkMZRljX3rg9Iohhx7iylsahOm9oZ6rqmKAr2\nMstoNGI+n1NgMEuICiGbDm0aRfBgCYOXEAqH9I67Vvv0S0WsJNumhAgEAUIH+GWvjcOjvML7RsFj\nubbHefOFW5Rc2zzO2sZxro/G3P/gfSA0QnoWuyPe87V/FYeiLEukqwlbXX7jl/8trnZUHrBQVx6J\nZl6VzEyGshasIYkiglaA8h7rBfc9cDedVoyShnS8hwgcNojQskO728Faj9ZNGLK6dYKyKEiikjhW\nDDfWmRQFLzz9FA8//DCT+U36nTaj+ZTQeBYCdvZ2uef4EGMykBFpWqLTimESQScgDNuM1Zgb4yl3\nRS1sIkiOHUfIGFOmhG1PK+hQ7Ke8+PKLvHD+JbKFoShKkpbCWsF8MieKNIYSgaAw0E0ivLUkcYu+\nqakdxFGLqkqxwqC0QwhDrDu0W22itqQTacrxNZyQXLuyh8lmDAZtdm7sMjz7KEJqJB5rPd3eCtl0\nTtKKKMsC75qQKNKKPC/wxjMdzeglLSSa/nBAhaOqa0zZEFhrjRKi8RBeNB5EgRcO5yvwhmIyxzpP\nGEUksWIYhfjB8cazSKhrCIKY/dmCyWzCjZ1tstpTuoBQauJYcPfmkHPr6wTM2TmfUVeGldUe/Tjg\ncpqCbFqEvPANQZygqFK8dQROIJUkbmuiJOap18HsG0aWbP8a4V2nsPs5virxsqa0Na2VPsY4wjBE\nRwHOOS6cf5mNE6d45cJluko0xUvvwZeoRUa716KsKwgD0jInqi1eSf723/lustm4qZK3Qj74G79L\ny0NVTGh125g0xRrP3mROIDTxtX1WVgd02lvszlLy8ZT+sXXe3nkHs+19AhQCibcwGY3prW9w48YV\nFg+do9tbwwmFDku6BJRpSitMmM/G9OM2Z44pBnHEWFdo2UjnIkjY3b7Jotqh3+nx4NmzXHzhZY7f\nfYayrAkDh80tzhnybIFxNV6HJFFCURZEoWJnZ5uk02tUvHxBuliQFjmBUqyu9sjTGVtqwMrWCQZb\nxxGqwhpF0au4mU+wCKbzHKxvuhyUQiEItKSuDb6ucFogtaSsSoIAfGXxeNY21qnLiqgVk5uqUcKW\nUnGvs+xapvGCUgYEWpPmKQ5HWZZ0Oh2ElQQCQqGwlSVQiiAIqKuaRHmGsaKoRpzqSrbCmHvXzhG2\n2swWOVErIB+PCakh3+fadMrb3/koV6/fJLuxTUvVvO+RcwgZsn5si6qqqKuKXm/AbLoL1mCKCiUg\n7jaS9q++DmbfMLKcOnGaay+/gkKwf22bZNBDRYrxaJ9ea0BepKgwYPvSda5e2iYJWygvCKuUbuCQ\nQYD1llBIsAvaypGnM7pxwuqgRdyJ2H32IwRArRS+Mrz9zBDvHEUaYRAUVjA8dpyHV9YwQrG6MqAo\nUtqdDR547FG8KXHpHH36GFG3w2c++qf0RiEvnD9PS2laKztMp1N+9l/9NKiAtfXjxJ2EbqtDN4np\ndzvc3N3m3Ol7uHZ9h5MPrKOVQiK4ef0VpJNMRiUbGy0W5YTpdEa/38GUOaHSSOEJQ4XQIYgK5zR5\nWjHOxgxW+zhXMxz2CaII4yV6rUtdGqazDGcFvY6mkxiKfMzpc6fx7RbZPKWTtNk4e4a1U6sooekd\nd3jB4RoT6y1VVRFohSkNXglKa9BKESCJkzZFXZLmOU4KQqnx1mPLmkgHCB1iyuogO8fiqeuSQDX1\nEAdIrSnrGlfVBEozn84YDofUGMIwoChyjAyZZhXeVPTaCXE7oaosxWJM4DyLRY5ylihKUEHI1vEV\nLu3sIsKAM2dOUVQZayogaMcUo5t4Z9FAUaZoXxNoSeV
KVgcDVBSQlsXrYvYNI8vNosKmls88/Rxf\n8u5HMXlB+/hxstQQ+AXaOV46v8P23h6ddo95uqCzska207SNJ0ELh0AFFdJDOd3j5IkVtHaE7YB4\n2CIJQkya0UoCZnVBO1BUpiLqRXSSDrP5BJgx2RkzWFlntjdGSst0cpMoiCjKkm4SMt1LMSQ89czT\nfNWbH+NLHn0b1jd1gq2NFXrdNvPpmEFvjaoTUxFz7OQxZKRYH80JooitKGJHwu72hGef/SSj7Sv0\nwhZIRdBSaAm93hrloqDbTyCIkVqjowrtPd61sVqS1nu0wpDaFPSSDt5YWkGIEILcBwjm/O4ffYa0\ncHz7X383baUxok0+yymKiqg1YHdvgisW7Fy5TF0XdDeOMTz3dggaRUrKBtQ4jxOCuqoJvUVpTW0K\njGjkV5wlDkOKtKCuKrz3RHGMtwavFGVZo4KgWdwmIa0KwijELolXO4sUAmMtKtBMZtNmf20b6RmJ\nECFeaaoSSlFTVhWtSFOXBZv9AVoGOOOWKx3t8l4DtgklWz1KZ1mMx8ShItByuQTZkVtDJ+6BkoyK\nFDczqPD16fDGVfCDFoMzHU6cPs6izijHC8z2NsnmMUyRcenCK1zbbmoqk3BEmqZY56h1gHUeVdfk\naYk3lsrOgdv7AAAgAElEQVTNOTZoo4erZHnGhRt7nG6vc2U/JdYBKlqBziZBWxOpkNHeHlWS4OuY\nbqtLPt6jHFVsrA/J8jkiiPFeEoUhpjaEUpOOxtx/zxnWWhHhsE2FIxtPaWuN8hWDfps6TwkCx6ef\ne571YZf57pxItahNxVqvS6AiemfuYjjsc+6+b+P8409w5uxZKt/E9z6veObpZ7m5P6YqFuRVRZrO\n8MbhnKMUjo4McNailGY830MFGl2UVFUBImaoauZFQWUj9tOM1SjCAbOsYhAP+fvf98P82I/9Y155\n6iKrK0N2d7cPF48dVMEFYOqaKi+aRXKhwDmLQhNGMUJKRqMxxlhEaWm32qhuj6IoMNYivcM7gVS6\nWedvPWmWEUURvKofSwgOi49JkjQysbVopanKEhk0EK2tQ2pFq5UgXNOZ4ExNVpdoHVLXFVGikAKq\nqkTroOnolo0iZ6ylNgVKNsl9GESk8wVSN10AnWGP+gs1wXdFReH2kVFAL2qh1lcYZTmDVpef+4Vf\n4fT6KtrAmc0t0jRlqzekLCvi7gBjLBbHfJFzZbrDd/6Nb8dkc6wBayxOwu7umNXVCOEE00qSdNfZ\nswWRCPEDzaX9XdqdFfbSlKeffZ7+6jrPX3qBeZbjowGhsChr6LZiBr0uvqpYzKbUmyeZTqdEcUQy\n6BM4wXgyotUfEgZQWIdLC+ajCSpoeplcVROFMQJDnqYMEs1kb5vVYcJ4vEtvMETiKLOUUHn6nYi6\n9nSSkMn1m9x98nRTyU5CqrqizitcbcgWE1yeU4ymSCXQbUU6H2FdySRdkE8W3Kj2Ga622ZuMeOXK\nFd79nvfwod/5Pc4e67I/H+OFYHd/yjm5bJQEnHc471Bao0QTNlmawp81liLLCMMWSXyrEp7n+bLv\nSyGcoygrpAqojUXjUcI3i9+ERgcBpjYYY4h1cNgJkOf5YUW+rmtarVZzAwqtSdMUHehlM6ggCAKU\nUgQ6AiRKglSeKIooyxJnLSLgcBykR8sWsZbUxYKyqukOBqRljpSCRZFi+AItSsZRQrWYs9ZbxVRz\n4qgDVcFv/spv4KzA156IZb+sElhXkyQRxmZEUUhaVHzl+97N+VcuoBc3EbWh1VlhMZ+SLQruPrVO\nXaUE3sFsxMWPP47qb3Dq2CaimDHIUtR0h/H2Lu84cYZRmdPtH8N1FIN77ueTn/4o73r3l2JcSTaZ\n0Vtb5+Rb3kpSWbQpQQWMr1zj+Wef5uTZM3zkk59hOBzQj7rMqHnl5jXarRYXLz7FY29/AKSmzmvC\nKILa4rKapL+OSFMwBmRzhxVlamReEEuJsVAsJlBvoLUky3NCFdCJNLUxtOKAwpQUeyO2TpxoCoam\nRJQLyvmMerJLEGi0lZTpjLVhn4/8yUf5rv/4byL8jOsXR/QGKwgb4K3HO4dXEkFAOl8c6cJt7k9g\naXIZLRWmNoggoK7rpjIuRKOE1TXCG+raE7UCgiAg8I30L6VESYkzFls24yi5XFdjGylXKoX3UNc5\nxlqc9RhjiKII6xxVXXJya4soVHjrKIqSPKtwTlBXOXEc0+l0kFJjKsfq6jrgqB0Ib6mrBa4uUCJi\nNp02LS5aopTFlOXrYvYNI8sLr1xl9/plqk8WRIEECw7F1VFOp9untDURhtneBF8L9Nom+/mMJJTc\n/+DdqDii9ikXLzzL2WPvYJrOWO10iVVEstZl59o1kJ7NzTWqKueBu0/z+598nkHsSIRhrZVgnGcx\nh+4gJPIx2oFFEWDYvnoNU+QUsykrnSG+mFOVKfMipy4rqsIStRKshl435kvffA4vJVjPg/d9KdPJ\nDp1Ys3X3Js6XPPP8czywtYojIQw6JC2F0g4dKbLpHnvjPT715Hmy0R6lV0RBm0gCYcjU1nRabZIw\nIaLGlSVVvWAxn4EWtNcS2qcvcmZrj1ZygQvWc33vvcRxTJbNePKPnmBjY40H3/IIJ+55iNliRqve\nod0Z8tKzz3D6gbfgjSNIFDh/uAamkp7KWJxxBMuGRa01SjfVfi8s1hscoJVu7iugFEVWY7yjTBe0\n220CJE5AhUc6T5nmRHGAxUPYqF95nlPXNdoYlJJ4SvK8RqukKcrqAGErBIJ0NmdhDKUrCXRIFLUx\ntaPX7qA8qCUpZyZjvtjFuaap1DtLHAZAiHGO9fV1jDEYY5sCuapfF7NvGFne/r4vJZts0233CKXk\nIx/+GM++uM1gpUvkK/Zu3oC6IIlCnJWMFxnd48d47C2PsD8eoyWcPPcAX/bOLyZAoFXExcuXWVza\n5k+fe4pyntILYnqrA65cv8yZM2dQwQpXtkfEwmI3W7SCNsnqXaSVpXYeUVtWBn0uX7+BCENKX9DZ\najEb7RK4GGdBhQkYS1nntGmxtz/GO0G/v8o8zwlbAWk2RwURWV6CqgmyOdPLVxhnY2oZcNfZu5hO\nDEVREEcJcStgoxXzdV/3VTz+8T9lvrAYJyknY471YmJfogtPPp8xVYaVjV1OnL7A2rEb9NcvMWjv\nIJ+k+b2C8/BDfxv+dJrz3B//e1Qm5r4H3sRDjzzMv/7lX+b67py/8r4fZf+lq6Sm4tL1m6zf3bSn\nHy4rBsosJy8K4igC75C+WWGptW6SeWOaTmWplsuULUnSFIEtniAIGk+iFLUxQNOi4qynKEo8lnY7\nAecospwsTZFSUhQlg0GvaXC0jtoWCOHBhQjh8cYwzsdoqQjjACmbm0846yiKggBJXRSEYUAnilE6\nYDHPqMsCoQRG3lr4NZ/Pqaqq6UKWniD+Au06lkrT763gipKXXrxAGEWsrK9TlSkyq2jFLUQYkEQB\ni9mCe+4/h0
lidJXTDyVxHDO9coEkjJhc3aHdbpG0OiT3neORR9/Gcy++Ql96Ot02IvoKrJJkC0Os\nPe1E022tMJ1sMzGa2XiXOpsym07ZH+0xPHEX7WjIL/7cbxG1FJEKgQy8YdjrE0YBxzePEwpBLDWd\nVoe41yVYGeLzglcuX6elNMPBkMl0B1/VtNpt9sYTtk7ew2xaMFztI0VIEEp8VQKecm+fYrrAC0U2\nyWkFcPJUSn/zRTqrlxhu3aDbv4F6wcLHgf+HhiDPAw/Q/H7BJvDl8M7vvcJbvvun+NgffiPXL72Z\nj376k/zgD/wD/vmP/kum0wUbJ89w8p57Obm2wqkH34IOg1tt9lLgqgrpPaYoUUJSugrvwC8XbjXL\nBWrq2uBUk0Mc3N2llbTI8qxZcg0Ya5u6mfW0um3iKDq8M2aoAtJsgbQerCWKIrz3hFHYFA1VsxwB\nb0kXGVEcLeVnT55ZimIBQBAotAqbNTphSKuVIKVcXpMgiiOEFERR1ISGQUAYhmRZ1oR5cYjQ5nUx\n+4aRBdPo8k8//hQdQvpao3yBtiWuKsjrGik03hQ88uBDTEqLKQqsaxogTJHRjRNsVdEJImxlUIEk\ny3KiKGE0GnHfQ2epswWJjphOJqy3e8wWI+L2KpP9GwS9mDPn7uPiM59mo7XJyolNrrxynl5vjfHO\ndfRbh7Q6Q/AJ4/GYyXyfh+67F49EaEXQ7vEVUY/aVkxnC2pTN6v1kj5Xx3tcHk/QSqADweo9DxKb\nOVlhuXFjh8XlC3RaLaQXrLQ7BIEkaCVMpzN0S/HwF7/I6Qf+gGG+Bx8Ffp2GIE8AJ4F30JDjO8A9\nAvN8ndH14xhXcu93PA/fBckvFbz3Z/5vLj/wNKMPPEo/FvzoP/p+Ll+4RJFPmV0NWOxe4/knHQ9/\n+T3AckGk8yRJo3oZU+Nd0yIShJqiKLC+kYmDMCQ3Od7eSsiVUljcIVEmkwlRElEvFgRSEbQltfBN\n645zpOUcgNby+NpZyrKEyhKFEc5YhGja9ZMowBhLt9fFSAhMgJcWpQTOW+qyxlQ1WkgWQFrnDPoD\nwm4HrEEuhQG4tR5HKkk2y7B1Qenz14XsG0aW55+9wc1LlzFCcd3XlGXJooYo7NFe6xKJkkff9hhx\n6EEr7M6I5194mSvdAFFZWq2Q3dF+M/OYgH63ha88stUnbLfpdRNQIUYG7I5GhFFEZeZYU5NlKbVz\ntGWMMBW2Njifke5c5dhgSF2VxKGk3xtQZAWtREInoswDqrIkShJ8lVFmU3qRAa/J0pLBsEfqJLOn\nP8a9p0/ixbJY5wxVuk3uDCdXV1mJVtl68CtJjUUJSzpZ4OuC69c+zpd87afYPPsRot8r4NuAJ4Ev\noyHHDwOPwtR02b405OKFhO2Pr7P4zTOYPCRImr6rn5p8iO//+WcZ/EIKXw6nv/dZ/vP/6hK7l9Zp\nr30Pw7JicX2MXKTcc89ZXh4ZjHeEUmKXzYUqDFnMJiilm4VgCLKiREqFMDlKAEJjlEBLTSgEZVHj\nfEGgA5RuKuLONe36WisEnsV8ivCglcJY3yxkowGvMQZ0Q7S69pS1xQhPDwneIKIAu7wngtCS2tbN\nTZKsxriajvGEcYRUEaU1BE6QyABbVzhgcURtM8bS6w2JIs3GxibT+U0Uweti9s8lixDip4GvA3a8\n948st60APw+cAS4C3+y9nyz3/dfAfwJY4O9673/ntcZ1piDUimyyh8lynIBhGKE7bUyZ8t53vxMl\nDFaFOA933X2SU2eON+16xuBxIBytrZPsXN1jPBszHu1QFCVbwrB2fIuJdWR5SZ1X3Lx0ldGiINYa\ngafX7cBlSf+U5fq1Paq5pBVpOv0OcZQgdI8qM7RkwGy0YGWjzxOffplzp7aoFhPCVkzpLF6DLGu0\nr9m9dolwuEmaL2h1OxhfYYuSoeqzN9pHoZhe2WbQ6yNMRj0bkxcL2r0XYOUDPHbmCcTPAP8SWAf+\nLtTfJLl8+TivvLzCxadXuflvu4z3LUooijpFKYFwO7SjmLADOgh4/Mkt/ocffyff+h9+lPs+8Un4\nLgh+KeX4z/w4C/lJ5vPvxoYhE1NTFTNq1V82FHr8slPYOku33WW+mBMEAaZq2pGEFWghCJUiL0pC\nFTVRkvcgHGEQHt43zB6EX1oTa01d5dRVTRAEt1ZZ6malibWWdqdDaTImkwlSapyQTVgWxwhXIZUk\n0QFIwWgypj3sN9esBNYI7PIWtJgCZ5v7jM3TlKqq6Pf7dLtdtNYYW1NXDtUwHmccpQxw4v97gv+/\nAT8B/O9Htr0f+KD3/p+J5jcj3w+8XwjxIM1d8x+k+QGjDwkh7vPef9Y60jCQSO/odTpY77GiKYit\n9BMevP9BVFQi8OgwxhYlQUtisxRbQ5JETEe7qFDgQ0G/FxGIhMHqCaTUSAS6spRFzsr6gKKquOfu\nk1yxHc4eWyegJp/NcUoTtvqsrg7odvo8/9wz7OxXBKpABAl9WUI5Z2+cIXe2sbrFr/zeH9NvJdgy\nozaesrLMphOOrw3QYYDuTRFhjxuTlLCTEMcBk9xQ6wif1wRRh6s7u7zp/oxW+1eI7vpVwkuX4R/T\n/Eba1wEfgPm9Xc5/7E2c/+/PUpQdxoVhtkiJAkE7NlTWsdI7RlGmzCdTnEsRRYESAfeePU06Lvi9\nf/PV/Hye8fd+8Qqdn5vDl0Pnx/+Q9S+LeOoP3s1iP2P9WJdga526rklo4nwpoK7qwyW3ZVmig4BQ\nxFRVhUWSGUsYJ7j/l7k3D9LsOss8f+ecu99v/3LPyqpSLS7tsmRZ1uJFljHGxvLaZm+zDTQ2M21o\nNx1shjbDGHowMDN0QzQNTZg2GJsGjA0YY2NZlmVLtnZLJZWqpNpyX779u/s9Z/64qWJrNBNMMPL9\nJzO+jMjIijrnnvO+7/P8nkIgVPXf+1yxDNDv9/9Okf+c9VfISimNAMeuagYv8NEC4iJDGEOr1UIp\nG40kx5A6CjNJqTs2msoW7HseotBIIQg8F61slKlOoCyKq2uksi7NZJ4bhCZJQpLGSGEzmUz2ayPo\n64J2y/3/tlmMMfcIIQ7/vY/fBLxq//sPAZ/f3zBvBj5iqni8c0KIM1Q36/v+/u9td5vsrq1hypxG\nK6TRaTG7MAdSM9g+h70yixd2ENLCiJIij/BsmzjPSOIJruth6xKRDjFpjCc12BVLV2YlZZZQU5I4\nnxJISTHdQTVbOKFFf3MTzxGgNdFgk8X5FkVecv3lh9jY3eP0qae46oYbsXsDAq9kNgjpbW9y4sgM\ncVRHKpeNnRTLslg+djmt2RZ5OsTyQ0ph73ePNNK1KKcxsdZ4UUY6HuEFexys3YWaex/1z0bwfwFP\nAP8KOAk9c4ST917B2l8sMJmOieIx61vPMp6kBLU6ds2n0ODXm6yurpOXhlroYNnQDP3KV6OHONJi\nfWuThcW38Qe/OebOtz/I/E1fhNfDzNkvQHElswuHMXpI03eROPuwvxJ
NNccpyoIsyxECsjy/5Dkp\nigzH80jijCxO8WveJQ99luc4tk2z2axsw/u24jTLyNMU3/Ww7ArCoXWJ8hxKKTBCVizkNEPaVQOh\nAk2KSoGtLIwxZEWONgbLql6KeZZj1xRFCcr1SeIxJYZpHCH3kwOkkEynEVqXJEmC67mUeY5l20ip\nSLOEuU6XwHn+vfBPrVnmjTFb+99vUfVgAJb+3sb4RyPyap06h44ewlOS+fk2tusyHI8ZTSOczgq7\nw4Th2iqFFviiZOPCOus7e8yHit1BH7/mY6KMa258MVmeUGpNYLv4zZB6q0mSarSQeIEPqkQVLjXX\nx2iJF9bo7+7Q7S5iWSVZUpBmKUIpVo6doDm7yJlTT3Gg7jONMpr1kAu7Q+YWFsjSMXVHMuMNyI1B\nizmMsfFqIeNpBJMxpYwZrO2w1F0k0wZqLuPhp1k8ejeh8yDid6iuWjPAe0C/xWZ340ZY/2Ye+cqE\n0daAzdULTM0Yzwow0sJr1DhzbpVGI2D5wDyD3h7CCFaWlyiyiOlwQG1hgZ2NDbr1HBXWcYzLxoU1\nmkdm+fSHXsa3/eQOzg2nEL+X0j5xP1/65IRGu8RuHWFF2BigFJVjMCsKjBHYjlsZpIqcNC9Qjo00\nimgyxXEc3KDKOcnS9JKHXu4DLMq8Op2ElFiui205QEngOZUCOM1BSISmkvFLsJVLUez7702JKEpC\np7raFfv6GCUltmWT7evRxuMxSimiKMVxHLACBoM9lmc7WMpGG41UFpNpjhCKvND4tkOuS7TRlAKS\nqA/a+2fZLJceY4wRz5/k9T/8mW85dBsBZRoTTwZMRjDOck4++SQ3XXscvfssxxoBcVpikoz2Qp1X\nvuF1bD79DPNHDxInCVZREgYBu4M+ruuRJzFtL2AqNa3DS6Rb25zd2uDi2hpRknHzqw9wz+fvQScR\np06dIpnG9EYjSm2YbbWQClZWDuP5NXa2VxnVBEutJgab4y++ls/edS8vffGLcX1Fo9VkFKXUOwex\nggCdFVi2xdQuqfk+C8sh2XRCbqe0Vj5Al/vg31Ndtd4AfATSqzsMzr2Kvc/fRN3tcu8Xv8BMs8ve\nzg5JWeAHNT77pQe47vqXce70WRItMLEhnCY0gpCnnzlDu9shmU5RSmJJmyyK8JSkFtZZG/cJZYi1\nO6UTQ779XTjvfR/8L3DNw08xOvWNvOiW2+j1BUrGSOpII/c9LQVpmu/XHTayLCreWJqAFDiOg9aV\nlaIo9aUaxLbtSyCIv03AFFSyH8uyGMdZpW4WFmR5lQig1D4I8G+gFs9d3fK8qiVsqyLel2WJbVcG\nt0taMsuqsK37MxRLOaAlJVXnLUljlFL7BNFKr2Z7Lsq28IIATXmpU/aPPf/UzbIlhFgwxmwKIRaB\n7f3P/35E3oH9z/7B84u/8sskkzGurbjp2st5xctvYzaYwUwnZLtbdBybbDhGktPSgrW9XYZPO3iN\n+SpUKM1whUUy6dNQkulkgK8syiJh2N9jaeYExi5ZWZ7j8LEjFMrC9Sxuuu0GiHNueeXL0VlKmWR8\n9KMf4dbrrqIZCLJpSVkIbjl+PZP+JjJLsQRYrofrddkbZ6RIRmtDsnxCp7RIC43tuGzFGdHmGru9\nbdLRmMNHQl76xo9hf2wNfgJ4N3ASJvI4/WdfTnzXNTh2iC9KRr09rr3iSu770iM8fXEdu95Cbycs\nL13OIw89Sac7Q6dVI40mTEYp2oQMpyVrF7bpdhugLbY2t/EshckS+hurzB1eocxdkiyns3iE+Hfr\neD/SQDkjrL8e4C7cRWfxrXSPeUwHO9R875KDUwiJ7/nV/X4/HCrOU9J9GftzCytNU+KkupJeGvD9\nrRZtURQVJ0wIpKhIKrmufDNBWKeIJpiyRBsQ0pBkKYUuCcMQozWW66KMJCuy/Y1gcBwL0DiOx3Q6\nJU1TkiQh8CsPTVmWNFotUlHVR0qp6vqn/lb9pHMcy+LUxU1Or24BBv3PJKT8BPDdwH/Y//rxv/X5\n7wshfoXq+nWcajrwD573/ezPsLN6HpNOEGnC2vmL1LtzzHYXCP15TJnjOJoyH5IXmgPHj7C2t0t3\npUOW5gipiLOMLJ1SRim+E6ApSFVJtzmLTjSW4+KUBUWRYJeKUgskGmXZ5CKByZDCDnGVYanu4LoW\nZWDIoxIz7dHwPHINTpKwlU3wZ33mOgHTYkRroYaFQ+BkZHk1eLu87hLLLlvdJk13m+Ov/E9Yv7YF\nvwl8Eaa125g8ezvxuEU8SdF5hPQLmo0a6aTkwYce4dlnzlGoEFPYXDh3Fs/3OXrgIFIbdJ5jOS6J\n1mxubNIMmuxuD1jdqCLtzPaEm68+QTHqYQcOXRXwxLl17FrAV6OLTM6lnPjjyzj03kfhl+Hof3uQ\n0dZ9pIMhsnEL4ewhcjPFLiVZmmNZbvV2FjnTOGcwGSEshWVbxPuJalJUC/BvM8aeawo8dzrkeY7Z\n54FRVsldjlJVk6Q0GA2uLYiziCyrrmBaiGriH08oHUmuBTqrrMXPXaXiZIKUEsetVdN9BFmakufV\nKMJxHCzHJt/XlqWFpkwTijJHYTOY9lmeb7E0VyMI6ziOy6fvffifvlmEEB+hKuZnhBAXgZ8BfhH4\nmBDi+9lvHQMYY04KIT4GnAQK4N3mH2FiKiFo1GqYwEMbQXvlCGlmCDozjJKYr33tMVxjCB3J0pGj\nGAP20Q7Prq2hjaKYjkniiKxIiHsjLK0ZTAbk0pDkOQePHGY8GtGp1cC2mJ+fw7U94jSlPTeLVApP\n2hRSMXPoIMYLkZ6PHTax45w0HeJKAXaEZRS1JOXY0TnUxi41z8LRDlorZKlpuDaZKUjTBF9JWnMD\njlz3QayfGsFdYL4I0/RfMj11G3k2JRv2SbOS2cUVppMxvV6PNCuQ0mJ3PMWfm2c8TVhotzjUbmAh\nsZTCkpKNyYSkVNStBlkBpZEMplMmuwOc0GJnEJH2hyz7s0RxzNz8HOvbG9AOGN1ykFHtevQ3vRf5\nEwmz2xE7jQdwWt9E4/AiAgtHN9jbexLLqlBHVXerBCNwPQ+kRGuDZYlLgz1rv9NkWRZlWf4diJ5t\n21WEnWWBEEhTpbHpsiTPC5SWaF2i04IiL3GVoNAlaRzhWi4IG0UOto0xNrosKpyv1riOV51kCoq8\npCyqE+6508OyLIosx3VdommlKBiOxtRqNdKsrJLXlERKh6IoCIPg+ffC/xPf9Z/jEUIYXcYMV89j\nZSmMEvrThNm5BYbRlMxolq+8nI999A+4/eaX0Wp3cFpNSm0QxYTdzS3aS8skuzvI0OOJxx/l2oMH\n2Tm/Rnd+jixOSOIUTWVGsj2PAkMwu8JgMqAzP08+HOBZFqMkQQXg5ilJVrC+tcVWb8iffOSP+KHv\nejuD/iabmz3ml5Y5/uIbiNcvEnYDkmlEnpWcO3eB+bkugbBxHZuMBzl4068jv38Kq2A+rijGP0qy\nfR3RpIfQGckwwarViErBeD
jGQTIcRvR7PU6e3eLMxSE6zbhuaYZZYVBKkJfV3KJXVLSYcZKSFJJM\nS3I0UZ6yO5kgZMmVJw5h8oQjx48ziEvWtjYJLMHc0UPIQvNt3/EIs390D3wNdj94gKDxU3ju1aT5\nE1j2zVhcxXu+63XoMqyEkiaGDCp9cY7WEmMqRbgx5lKd8ly98FxNoZQizyuKp1LV9N93XISprmel\nMTiWTTydkueaoijptHwyUyJKUK6DMJKGo0m1RjgBQpeXJPyWqmqnNE3xPI/ppJrAh2EIcEnS8tzp\n5thVu7j6uyqJURD41b+qyHGl4P0f+hTm/8e04v93j84ok4TQc5nqAb6rGeyt02i1kE7Ao/fdy0tv\nvYX2jIdROSURBWAJzezSHMmkh7AraEGrVkNmCYutGr3+Dk3Px7EM0lKkOmK8u4vje/RHE4pGnd5a\nQle47Ay3qQUu480BZWHYm044sLiMU2re9Na3cvjgMvJgh+2jGTXXplQ5K/PzDIsxxDmzMwv4s4sk\nkxGz7Q65OcnizP+BeHMOAZi/cMl230e5e5Q02sMiZ5pOyF0XaQxxf8yoP8APWxU9MzGsnT/Hq152\nB89+7XHmlMaxAqRy0HmK8iXBNCUuNZ5SFdKnKMlNxfZqaMlUJzx59jxvvOOVbO9sULgBM50OcTKm\n2Iu45YpjpGoO8wP3II7CzC+s0l/7GCb7YeTxl2B4LYiYO77xdfzln92FVFWB7DoKU5TkxkLKKoPS\ntu1K/rJPyTd6n2rB3yBSn9tEw9FwXzE8JfQDhJKUeY6xBBNTEoQ1PNvDrYfUHKeq86VkvLtNw3EY\n5THCU1CoS9P+PE1AV63tIsv2vTiGLMtwHIe8KCjLstq8loXjOJdkOFKC4yoEFXfZSPCt5xdSPv9P\n/xkfI10ef/Ik58+dZ3Vnj+3+lAsXLvDAQw+wtrnBuTOnmZ9pkE4MOndJI115H4SiSKa4noMlBMIT\nPPLlr6CNYiJtWgsr5CjcWkgmBV6nje36BLZHEIacP/sUykzQVkyNHLucslBvMuxtI3REOt5gZVai\n8hHGWMRRRtcCS08xo13Wp5sUwwHdwCEe7RD3Npn21xjvnMep/R7i3+cwA/ojTYZrH6CIr6WQGcay\niDOBwcd2HMajKXs7fZqteYosJ5sM+fL9X2Y0mfLs1x5kIbCo+3XqnoNnW7hOiMlcPLeGbfnY0gVj\ncK9jDJAAACAASURBVKTAERILm9Cx6XgOs7U6m9tbeH4NM53ikRNQgTrEYo2vfXmPZ9YX4J3Ar8GF\n7a+yvvkyPPs3yJNlzjzwAOfOf41vfv3rGPa2mMSaX/zPv8T1Ny/zgz/8vXhuFQSUZdULyfU9vMBH\nOjaWbe2zjwV5XiKEosgMwmpihbO0Fy/DbS+iarN4nWXcxhIHj1zLkeNXc/T4lRw6+iKWVo5wzY0v\n4xWveB2+71J6CmEFZHFEEqXkaYEwEj9oIVVIrTZLszVDp9vF891q6i8FxrLItaHV7iCVTRSnCGmh\njUBZHkK4GAFJFpNkMeP4+T34L9g1rCgiTn7xbjrNJtrSLC4ssDfos73d4/LjV5AXKVYQYJQk2dsF\nFOc2VtFxyZXXXkGeF2SpxvFt1lcvoDAMRxP8epOH7v0yaTIlyguMnjIexVBoGq0Zzl94hrrv0W62\n6LRqDPoDZrozXH/l1UgFSZ5gezaPnXmWm295GXPtEKEzKEqU8ZBKM9zcRLk+VhiQuAHba89SQ7Nw\n47tRVwzh4zD2P0AZX4mSJTYp0XhKHieURcFkPEGXklhrTA6j3i7bqxt0Fw7x7KmnGO/ucmRmloa0\ncd0mqTZo6RJnmjjPiYqCSZIxTiIyNIWwSIqCrMjIdEat6SEdw8rKCjs7OziuzdzMLMPREMe3eeXr\nb2Gw8yVuv/6/wY1gzkti6xHiZAY/qGPFG/zRr/8kz5ySrO6sklk2B1+k+KEffTu/8NP/BZl36PUT\nHLdBXiZ4XkBZ5sRxBWpH/w2hXghB4PqsvOg6JtMI3/MIgoC5uTnCMCTXhu3tbUb9AXvbO4yTceUv\n0XDZylFGu6ep25pSu2T5CGGqrptlWVhWpT7PsuySPSDLMlzXpTCacZJQZjlKc0mC47rVlN52HWwl\n0ZQoS5BRYNKcD/zOn3/9XcOkVHS7MzRsh3S0C5sXmW732dvaYcOGmZl5onGffNIjnWpmlpa4+sgJ\n/vpzd6H1cfx2AzeH4eYGBxeXSZIJM50W0nNZeMedeI1FPv3hD3Hnv/x2tFKgS0yRk0UxKolJTEa9\nXieJcx594DGWrr4Opx1gpCEpE5auuBpdpJSWwHFCEgHSqdO/cAbHs1ACcmXRqNdxlw+heQh1ZliZ\n2C5vYJ++DqMjPM+lTDOEspiMI0K/hi6qqLZaGPDUY4+jdAZ5zPraWcYbGzR9BxdD4NpYlkagmGYJ\naKoiVlgoCb5tI7Um1uAoh7LQFUlyMqW70KIeevjOApPJmN7uFvPzi0jl8MgXHyIrU2684wC1a1cR\nX9SUr/4s7cY7ePJzdzF+4mluvuEmFg85aGvKH/7xF3jTnd/C//a+X2W0G/C6b3gJn7v7XsoyYzqN\nSeIq9NTz/Ap5LPeL6/22caYzglqTzswyn/vrzyKVQUoL3/ept9p0uh267VkOrlyG4zso6WKpSkb/\nl594klQWpFmJkAazP4jMsowoqhoQz7WO0zQlDMNKkmP2WWS1OjayktjsP0IIpmlM6doouX/jUy7S\nff4R/gtXs2DY29vD7nYJO3WGm+epd9pYvT4hFmlvhyIrECZjPOixdHCZ4eY6ridJsjEOLgUGzxEk\nUR9fgLI08aSH64foUcqxq45WIAYypBIUlsbKC4o0xq+5FPkUR7pITeUjKRLCWoApMoQtiVKN8D0m\ncYo10XidgPP3PUK3ZqM8RaPZZev0UwhLMXP8s/BJ4E7Qk5soihLbrvLmjZYkUYrvh5QG3KCJpQSD\n3i6BUliWTWwKSA2WUNi2j0ZilEBIg9IGS0mkyPbjvEscS1CWFmmZIXQJWiBlRW5ECOp+yGBvl1qt\nVnUdy4I0mQI5tXaTNFGM1y+ndvsq3AXBNzxMLN7F0dtexU60zd5uQnd+yGOrJ7nxlps4v36aN77x\n+/jkH36c615ygi/ddz9RBEoJLGUBijzfn7xLfSmd2XXtSgLjugRhg2uuvZ6DRw5UamNjkMrGUjYW\nVfxflKX0BwMs5bAwM8skisDWxKnBlDFKW/uninVpQz4XHRGG4aU6qSgLLLfSngW2u++IrAx3Wmuc\n0GcynRIELkIbBIZOWH/eFfvChRkh2O6PWDt/FkPKfLdJtBPzxOo2o8GUcbLHXGsOTxuQFqfPXkDU\napx55hm6y4tsbfaxbYdmrYFQqlLERgXKb6MzQdBt00vOkWNQykGXOVaak+c5rudCqTFKkZRjnE5A\n+8hB8uEumSkp8hSZCaLdHl7ZIVSSrO6jA0F3tksoNEZpPNvCOC55VuB3Hqw2y89APnk
5pRb4tosp\nM5J0ShB4aLtgMhwiRQlGMR4PyIoU43qYZoeNJ07h7tNbMNUU3QiX0pQUGhBVJHiZSxQ50s5xUIjC\nIVcpRudoNIXWjMcT/LpPuK/cne12SJIUpEGnGZ7ncOFCm8XbgX8HZfmXEK0jnRDvpR6zcUBgH2dA\nxnnj8arXfBNfveczvOL229nY65OXOUoF+F6JEA5aVAu20EUF+M6rYeB4PEZIiRIOllI0Wz6j/oCH\nH36Y0WhMvz+oAOK6cjpKKcEWfOd3fA8zCwvY0kOZFJMOUI6DsQxpFmHbNQpdYEp9afCYF+JS29qy\nLLJx9Xv7UbwvkbEIAh/XdSAvMI6D2V+LYc0nV1+n3DCNRRk0uPnmW7FIMUVBWmRcc/NtmFgTy5TQ\nq6HjEbUg5MyZsxw9djkH5pcrtI3WuJZDb5ry1FcfZGGmgbYV42hKbgR5CYOddT71zGlSk2P7Hmla\n4rkB9WYdipJElzRaNbY2+jz4yO8wUwuI8xTbdasA1WxKkeXYYQhCcfkVx7kYj4n6u9UGunCesBbi\n+0MOpBfhcTCvUKSnr8ZzbfIiJopGOFJiihLQhIHHxtpFilwTeg67usAG4t6Quu0RRTHSC8m1ItE2\neZFWs4oiIlCSQpYYYVMYiWeg0AWZr5hEE4YxaJ3R8C3yaEqjWSOaTKm7LpbjEgQB69s71BoNRmnC\n00/VeOkPCuQTBifdYW/4CR6/7yR3vPHHmMpTjLbWmGsvE2WbPHTq83RWcu648w4GW1OeevwpHnz4\nLEJXchMjn6tTKgm8slSVVY+hLEC51U3AGEWW5Tz99NNIaUiS4lLg1nPSfYTAdR10UTkhpSqxbA+h\nbNJ4ikQyHE9xQ4+i0BQajBTVcNrzMQKywmBbVavYsixMmWFEgdYCbMMkqSCAUlaQ9SyNcd2v0+Qv\nYTQd3yXb2sBxFDKoY2uDSXbQSck4KfAbGjNaI9otOFQPEaM1pPTIikpbhBDMtEJuec3LKXc3cS0X\n3a6jPQ+RFKTjHq5fww08SiEoioJHH32Sm1/+SrI0QcsSdIooJWVuKLIYv91BeAHJeMh9n/0zbn/V\n7QwTqLsOcb+He+AItetvQNsetrRozC2Sx79aaRdeA5rr8fwOioIiNzi2hP1pcjROEabEsgz93oAr\nLr+aYTRhOozob+7Q7XSJopxEWERJiokyGg0fPylxEfjGEKiSRFdTb3SJcm2eOX2WwaTPgYOXYWTI\noL9BPZxhMh4z02ljScV0PEK6DosLi2zv9bHDEEc7DPrzdF66CfdC9zWC177jpxgNHqe3u4PvZHRb\nM6jWDjurG8y7Nltn7+eZZxK+/93fy/AX/zOPP/IExgQYqfblLCVCaKTtkJcFUkCSZFX7G4Vj+8RR\nVmVSKonWBcbIv0nvMgZTQBAEDPp7uBaIUpMagchLJArHdUmLnCwvybIc27Yo8xKFIIkiJFRBtWWM\nlBJLSdJS4YYNkqTK8RGWtT8slXjaRroWyv46BVYYCmY6NZwyYZQUNBoOBTFlqcmk4sLWDgcOHQcd\nM+ztUu/MM44ThFMyGQ7pLs4hACMjfNejXxQo26XMC9wQlGfhqRZxkZDqCmRg+3WUFkzTiHxavbE8\nrVGU6DRHKAdTxGRRhm27zHRnSdIJbn0OS0I9VHi6QZEVODLFwjB84iHCE5+7VK8Uo5soshhkiabE\nUorRZEI9rFEWKXmcMtNdpN8b09/rIfbZwvk0IpUOhRA8e/5pjh09Dq7HudWLbKyusjQ7w/Glebqh\nh2U7jKYJwyJiZ2fM4nyX2U4L3xJYtmHx0DJb4ymOq1mam6coMoJajRxNmka06yFYDr6wGJxfpnP7\nJnwe+jf8PmsTybXLVzLyXCbZFi2vyWxnnsXlN+BKTZ4+w4d/63fpdhq85R1v4+SjT6GpgoTyPK86\nS0VGMS0RusT1fSamxKAoC4OwFJPJpAp8NVQnUW4uXZ+EMGgtENJFGIs4irGVhXDc/agJSZwXSMuu\nVA2WizaG0PcpiqqIF44iH8cIy3D2/CbGknzjG95E0AyRumTYG7J97iTC5JjSJjM5Uit09HVqKxZS\ncnFtg0efOcXswQOUO1vUgxqjvT6nTj5Nvdnlw088QdMu6XZnObfRww9DxqUm9H1G01X6owF21fIn\nn8Q06g2Wl5dJplMsz8JCUmu3K4KlqVBLw3GMr23cboMsGoJoIixw7AiRlKxe2ODAkcsAyeLyAcLO\nLLgBJopJ0oTclAjpMMkyMBkxE5rWw/AZ4D/BdPMEvlVQFhnCpIyGPUCjy5xpPCWwfAYTzUK3y8UL\nF7BrdVKdo3UBtqRdSo4eO8a0LNja3GWwPaJR79AMArwixfdaRIXFRBf0CsH2MIO4x9ZggDRwZLHN\nfLtOYFn0enuMJmOm0zFz8zN4tl3JUcoUx/bIc9g4f4Ajtz8IPwbhT5/h6rlrmI6mWLV1uo1F+ju7\n+GGb3clHOdB6F7t7d9PtNvnVX/nfSeN5hlNJEJQV1E5WFMnnGGF5WZKVBl2AJWXFJbMUu4NeZcxC\nYHTO3xGmiwIlfJSwiSZjarVG5akpC4QCaSkcqbBsRZ7uJyYbgy5zPGGRTCaUsSDPM1JbMMwlk7jg\nQx/9JO985zu56+4vUeZNpFqhMJIkjStOmSnB1J53zb5wrWMDS4sLzEiDcmx602oKfs31tzJ/+Aqs\nomBleQkpDFmeMRmNUbaknkY4aUaZ5Ry68nLiGLSo7Ku2NJBlZBgCy+crjzzO4SNH0dMRw+GAsBVy\naKnLk49/BZuSYZLT7/c5v7rJkYMHuerEURbm5kn2+jgzHVzPZZKVJKM+RBHpVKO1wQmrt+jW+VVq\nM19h6Z4CroAsXMJmcf/+a5HFGbUgIE/2mVj7MeUIh+HODiqZYKTEMoblmS79/oAZ32USpTy22ifC\n43xq4cSa3qRHMV+jVZ8hkDZpaXhms880MewN9khNgVtrsXNum6vyjINNjyKOiUYD5ue7ZEmE4zSJ\n0hjHdkAYCpmx21tBv0kgTxpq9Di/+RSbF9uUfJXOzJXMtFewpE9oTynV3Xz5gTO87Xt+gg/9yd2U\n0xLpxvSnE7I0545XvZpnzpzB1ZKCgrd827eyvr3FxuouwiuhNMhUMhjsIdAgKj6BpgoYMkWJJQy6\nSJFkxNNtsjQFZVHkOUWhUZbEcVymaZV1WW+0qAc+SigKpRCtgu2tbYzjsba5QWk7SAF13+GPPvph\nYtpMrBlkqVFWiquqDKBS1iik/7xr9oVrHRuoh3XqSwak5jJ7FlyHaDggigbM+xb9rXM4YYcwDPCU\noZhOmExiThw7zGTtPCYaUxMBWFVme5lEiDzDkg7jpE+tXWe2FpAWEbOXzSGkjTAe02iC50sWswJ1\nfIWjBw+zub6Kb2LynVWUBl1MCFVOcmGTRmMGU+bU2g0+d9dnuPWGqymjjJW5BWorm/BB4A0gopvJ\nkxhLSNzAIctzsjLDD0I2Lq5hKYljKTZWz5
H0+tiOC0LiSMG416PuhRjH4cz6gNO9MVM9JrIlB+sz\nmLIgEg36cUbDc8knEY5fZ5SPWWnO4dkKneV4nRauzJFCU6/7jEcjpKrmHoFfw5EORVYAEaXWlDKv\nuFymCgYKRIvrrq8x2v4mTm2vs7b9WU4c1zTCOxHl23jrWxcpCskb3/x2yCb82R9/GkGXRqvNJIrp\nzM5xy6238ZY3v5nNjU2OXXYE2WxSAFILRoMxGxvnGfW3SZOEWt2nzHUV0RcKLATKdvjI7/9XkiRG\nGU2elaRpjO16lMWYJImrFjmGIj2LNiUlhlKaKnXYrrwxQRAgCgedpJTCIR6NyURBHnaQ6YAkPsu0\nmOCKDuHci5Bq/nmX7As4ZxH4YUhvd51GMyCXhng4wAltovEIp7mEKyVxOkV5FtGwT2umQTnagyzD\nCxokRYrvC6bJuErwxTCcTmjNzeMJn0cffRgzP4vrWQgjSOMEowTSUUTDAYHrUo4TpDFQChphGy0k\ncRxhsowszwjCJvk0JkmmuH6bdBhDlNJuBmRG4tXOw6PAeyDuH6LMSvBtMq1RtoUkZ3d9HUSK79ZI\n+0PEtEdYD8hLQ5Jn7G5uEbjV3GCcaGzXZrHpgvJQOfiWTT0IaNk2geNQkzYdt0GjLJiqFJ2V2EZS\ncwOkEgSeg3AMrp5gCVBSUOQpUTrB932EURhhcB3DzEIf8RBwJWj/GI6cJS3vQ7TPYG29hKsv/2Ge\nufCHzBx/EXlxhq986fN89r8/y8Mn/4qffP9v830/9O942+vfBJHiLd/yrdz08pdhSYtnvvog/a0N\nnn3sQSy/wVW3voTZlUM0uzX+9Xvfw2c/9xl62ztkeUpelqBckjynoVxKU2CkxnZsiiLHCzucuPJq\n7njN7Rw8dJhGo1mhXaOcxx97jI//wcfI4pjSKkiydB8ILlFGIRR4jl0xyLwOQqUIZsnjU3TqNYQK\n0KVNbHukUe95V+wL1zqWhiiJ+dpjXyPSGuFIRAEGi2uuvZYvP/AYe+MejuVhJTlt32YUDdgYRdw1\n/SsC1+PAoYNsbG4gqdqOZVni2YoCSZompEXO3k6fLMuohSEokPUWpizphD6TYQ/XcomE5OL2Lhd7\n6ywuLpHlBZ6ARlijjBJmZ7toS2BFU6K05PzmNnZPIl2P+rEz1Wa5DrZPd2n6YRX4U2QoISqJh+OQ\nxCM812EYb+O5DhfXt/FqdQ5fdoLH7v0qTVGxhAMjOVbzOOw3kZbPZhQTZyWhUnSDgKbroArDnF9n\nNR3TD1zkfgovQuKqKr1LFxqDIU9ThK5hW4rpdEJ3pkt/OEDaksDzWDy8B18AboHx2CEIIRq9juHw\nVvzml5lEj3DNkR+gF32eh+9zueHKa/gz+de8+CXXcGHtr/iRd38v49jjt3/jV6n7Fs888AQLBw7z\nyFef4PKrLqfdnCeKRky3Nlk6cLCKtXMtPvzRj/Ctr7+T8VgglaE0BZ4rMeQo6XLDS2/if/6R93DD\nDdexvbbL5uYuYd0lnozQGrxGgwPXHOS2V9/GW97xNn7q3/4o/e0dkiSuWMrGkBSaNEv2ffclaWIh\nAlFdzTDkSYEWGbYUSGETpbvPu2ZfwGuYIQhDbnvFyzGOzdbaOsp1iKMEV8MNN72E1soK26ubdKRi\n9WsPckDUuf3otTRnOuzt7mEpyWDYw/d9zp8/z7XXXs1ge5Pu3AJr586ydNkxTFGQxDFBvcbm+gbL\n11xHtLmFKWIcx2KaZOR5xs0oaq6HNDm725sUtZDRbp/55Tlqto0V+hSm4I3f/GqC0EfZNrrYRPUm\nUIBe8GlvX4UkR6d7FMZQxinxaEKt7jPTPcBkNGYSjdnb2yao19HG4cL5DUSh8FTF0ko9QZkLHCqq\nfNdTaNelGXjMhYpQ2QgjadkWQVEwW/OJ8wzX2LiWrGTrtkCJDEdWYIZoMsGyLZqdThUeJBWYEksZ\nmnNnK4jfO0CWLdb7P839f9ngtd/8fp6636N2YoNx82eI4n/B67/xRi5u3M3PfvDHsYqX8O7v+35E\nnqNwuedzf0HLdrFVh2n0BQbxmMF4xDfccQcP3ns3t7deznQwJuh0QQmCsM788gHyC+tEcYE0JXU7\n5Of+w09x/Y238Sd/+Bf82q98mNHmB4l7MY3QZmV2CSFckIqw0eTf/Py7+MpXnuCm21/Mb/3Bh/nU\nH3+SD/3mbzHZG1TRF0Lh2IrJJKLlV4ANXVoo3yBtG2MS0jyhFAJPuUj9ddo6BoMd+JQ9Q2BpvG6N\ncZpx8PACWZrRiwZY2SzRsIeHZj60MUmCG0+J+wWhkigKROgwHOyxdv40x1dm8UTKyS/dzdLMDFun\nT+L6LllWcO7JbQ4cPIQe96h7kr31Xdx6DVva1GxDno9BgownzNRgyyRsrj1LaGaxXJuNx3c5sLSA\n53rsXZxUlMb6A8xHwLWgk8twTE6Rj0jjAeQl2TSpUE+UjPpDLGOQomBuYYHt3hjPr/Hglx7C1Qql\nClypCU2DXFmMswpAV1MSJwio2TYtXxGbDN+r4dcsOp6DFjmJE2BrG8eRWLZAibxSLNg2WVESRRGd\nTotkOoVmC991KcixpKGzuFltll+BB+4/iNM+w513LlE6f8GVVwUE4TLRsMNly7cTT1KeOHmOV7Vf\nxc994NX85M//GN/2lqdxsoQZF5I4YmsvxSoyRibm5OlHecu33skbv/NbcJXA9lzS/Vht17X5nh/8\nAa5+0VXcf/+jfPpPv4CnA37/P36CzdcnfOJ3P8UwmXLZXJuZ0GG5USMe94mEBcIingz4jZ/9Db7v\nve9ivJUSznm87q1v5o3/4u284RV3oCcRgsrFqaSLkAohBbZVq0YJSpCmVeCsUjZCKyz5D4hdf+d5\nwST6ICvYgVejRGK12oSNJrs7Q3B8lDZMd4f0V1cJPI/2/Cy1Tgur4TPII6TnMNkbUcYlgarTmT9E\ne/4gW7tjVo4cY1oI2s0WltcE26XmW6xvbSBDj1wZbNchTkvKQjPJDUkpMHnKaDJiZzAgxyLJc5rt\nGfK8pNFyObN6lmcvXkCWgovnNyjdZyti5HWQDJfJ0gRyjSM8TJbQbLUosJiOKoXCaDiiWZsnLW08\nO2Tn4gVMNqbVDip1rjG0XMVyu8WJ5VmOzXZoeiG+coknEfFkikDQCAJqrs9MrUbbcSmnIzoyoW1p\nahI8YXCkRKclli3wfRvlOEhKjCgYD4bUwgDt97A2K4lHseTwkjuu48rjLqsX/orE+i+06l1stctw\n9BnKdMSF1ftZmGszmvwy7/qB7+bI4dfiWjYvv/E60kKztbWHTnMG0RhtC4LQ5Xd+6YM88Ocfp7u4\nQCHKKlZDC/aylNe++nU8+eV7+ev/+GEOOjE/+uP/Bk+2efhPPsfR+RbdehPHCLTJyYqUKBvj6Byn\nNETRgPOnvobVkTxx3+NQGozS5MWIT979KV5y222YTOBaHmWpkfhYJqIgwJQThEipegQ2UgtSYUH5\nd
ZrPIoTAoIjznL3hENf3sRQ4M4t89BN/SjIecvOrX8v8scv4wqMPce1V1+LPHWc4HCJcj7u/dD/z\n7Ta9eEqr1qK7fIDHTp9D1WYoZhYpG12+cuosxy47iL24TJFNKIdDdp+9QJZEkERgO5w58xhPnT6P\n2whYnKlzoN1GhBZt2+XG226jVgsIux2UG1BsbVDGKQ+ffIrLjx8hXPp4Va+8GnR0GKVk5VnfV9ti\nDLbt7DN1LcIwJO4NadQDelFCUeQszc0jBxE15eAJG08KbAeUYyOERpfQH03I4jGH51dwQp/drfN4\nXhPfmjLc2qAZtpA6wtFguS6FlOSlRTTOULZGSg+ExFJO5Rj0LaaDCS863qtOlVtgY82jMX+WJx+Z\nY+XgS8k3FzGqD0HEiYPv5f4n3s6xA2+iVQoefyinOTvm//z1/wnlT5ibDRkNK39+VqSUXsDG9laV\nqbk3Qjx9ils3t5i77AhGSKQx1EKXydMXuf2d38LxK17Mf/3pn+PDP/2vObLyai48/ShBWbAcCp7Z\n6dG2BbljU0YpJrBBCaIswZDywff9r2ye73Py9NO880ffgbRdSgk//8sf4Jff/4vcc/c9lVvSUii3\nTWHPgWii6wcRGchCkytFUabQ6Dzvmn3hzF/GIJVCKMXBy47SnZ0lqDVozc/z1u/4dr7n3T/AkWMr\nLM3N8do3vJGZxSWCepOjV15NWeTcevNLOXriMC+9+gRXnjjI5QcWeNFyiwO1Ej/f5aCX8ZqXX01Y\nbtIYPsVKM+PY0YCm32e2FdGdS5mbybj15mW+/btfy1zD5fqlIxxttplJJXowwI4LNs9cYO/iNqun\nzlCzXL76lYd56ctuw3Zdgvb6pZMljw5VQDmqODchYTjoVQE8RU5QC9FaEwQBw709Fudn6TbqGFPS\nHw3RAjzfwQ8cbFEiKJAiJ0mnjKZDLEcxGO7x7OkzjKIpF/fWydHMLCxhey6ZUIgiQeUxoW3h2hWN\n0XVclFK0Wh08t0ZRaozJECrBaT1bUd5uhovnFvjMX+2irIsYL+euTz+MV2uzvnEX/fJT3Hrdn5BO\nTlGWbVqda/jvHznJ5z+9yrRvE9qCelBDl4LETEmLiLi3R9uxsWzNietezMUL6+gcYgTaQL0sue+3\nf5cUuOy6y3nJDa+iI8dML3yVVs2niHM6EqQlEZ7PsMjYSifs5SM2J5ukZcbZQY9TDz7O677zm1l/\nepNf+vFfoMg0GHAsyY+9/9/y8c/8KfX9CAslLGzLBpWQqcsoncMY/wSlexRJQCNYet41+8Jdw0Sl\nqp2MRxidISjwAwdNxmhvh3LcR+URACbKsPICKxmTWClWGePKAo8UlY4R8QQ93CUeD1FhHaklIjGI\nxjw5isJoZK1Jngks4yELF1m6lLnClS5pAt2ZObJiyvruDsYS6MGEeG9AoHymgxg9jVk9fYaZdp0s\nHgMpllqDp8FcAdPeHJayMaZEILAtl7DWwHJ9lhcXQUukFGR5QqNep7+7S7tdr+YxlMRlQqGqiIQ8\njRmN+0yjKdFwTJ5MaLbr2K5Hd65DarlsJZqdomCQFYx1SRnUGE8jdFGAztGiyhwpyhKjBYNejxJQ\n+/zfspTUZtcvnSxfPT0mTw5y7bWaJx97mpe9VrJ9/hxL9XcxE/4rErNDZyXmyae/wFVXv4VP/+UX\nydICXaZ4nsNw2KcsS+p1i15/A98W7Fw8y1Ir5LJjx7nmhhuQjk2w7zyOjOGV7/leLGEokilOOKLu\nKAAAIABJREFUfQ4tBS1nlbIYMo5SPOkw124ySQac3V3jmVGfJzbPMcljPNfGbs4xmg74vd/7ELuT\nDdYfPkNvY5dkPK0AfpbBchRf+Mo9zLZbWCLDyqdkozNYg7uwR3cj9v4cu/9Fyuws463/IYjo0vOC\nbRZtDAJBFOecPXOeZ06fZW9vj631NdLxlHPnNtncGtDfXCcqYra3NkjTkrKf8PSZizx57v9m7j2j\nLLvOct1nrrx2rpyrujpHqVtq5Wxl23LEgG3AGLAtwsHAOYA5pAs23IHBxoFrHLBxDhjLUVaWlVNL\nnXNXVXflsHNaea15fuyWbO7FutzrHzpzjD3G3mvt2lV7rPXV/OY33+99FqnGKnMVl8WyR13otPyY\nSqnKaqnMaqVKpd7ClxquH1NqSVbLHtPLRU5PLzA9VeTsYoXT80XK9TrN2Ofw7CwnlpfZf/YcZ4or\nTC3OMb06z2xxmeVyCcuy8H2Pqbk5quFRxCkJk+BGfZ0qkOuRSplIVSGMYlTdRDUM/DAARUOogjB2\nsVNpIs+nUanhu21Mwz6v2o0IQgXX9fDbDsVGm8XKKj0jeSDCUGx8X6JqGUDHCBLspkfeTVBjSdzV\nxUq9gRAgVEEgPZAdwznbMlFMDS+MUbUuKo0SPYUmHAN5MVyzfiPX7FqjPlUlCX0eeKCBmnVx4hco\nlR4DsR9buZ4v/fM0l+zZDkmWOBRkMzpO4JNJ5+jryjM+mCOT0ZCGSkhCSjdoTZ2m3l6l1qgQygip\nCHTVwB4YJk4SDv7gaZzGMn6kY6OjukWCpIIMAwwBU2tniTTBzi076E/30m1l6VVgJJvDNAzStZCr\nb7kOv9XmL3/z90mnu/iDd/0ZqtQ6Xsm0+eyXvsBlN16MrecwIoO8FqGhkYiIROroahdZ5X/jalgi\nQdc0Rvu7O6JD20BRNERXP0IoSDvNzMFnsG0NNXZo+GA0FW5+7evA6LAPeyZ3ohCj6jr14hpOs0Yu\nk8W0bbwoZGRyE4q6kVjNMLhJIREhShjz5COPsXdiE5qhEgud0XVbSKdNomqNMJGYuk6p2uDpJ5/g\n1bfegmg5hDIhncuQtmwi64XOeuUCqJX76RroQcZ1BB5etYKRCISMSaIIVZF4bQfbSFGPOjOqpmlU\nKlUm1m/gkbPPI6RkQgikDAniiEAqrJSKWENp0qk0fWQoHptmdbGCEyW4iobX9tBzBWqKQgsPeyDF\nZF9XhzDQ2QtHVVU0Q8NzPQxVwTAtFAmTWyKUA8B2aLgGBaXN0yf28aNvOrz/49fStXyGMzMlhno3\nYugVkmCND/7tIQ4eDJBKDsdTUZSESEp0VcPUFbw4wm2HbBod46EHHuGKS/eSNi3wE44+9BTbdu/B\nHNZo6AqFfJ5IkdiRQencMn6zju82CdUYJVERQYswyiM1i367m8ncADgRQiY4cYAZKRTiJhsu2cNy\nqc6Z+/fTbkd05XSeefxxWsttPvX+L/GeP/o1EksgspL/8x8+zj988PM8+KMqYVlHDQS60Ih0A6RA\nEerL3rGv6A4+dDRdkefg+x5pkaHeqNGT6yUIFUBh/vRpMiNdRCKhZ3ATjrtAa3kF1UgjhEGspVGD\nFn6s0XADenvzKCoEiY+hqRhWjlariXCbqCJB2CpmxuayvdvRvBqRtDGSGBGG+JUSBgpqELLSWKWB\nhZnOs7Z8DkOxGBoeJgp8Cj05Wlqr48PpQCY3R316hrTIgB
qTMrPEUUQ6m8KLfRq1Gi0vwUcjZWWI\no5Ao7GAzHD/Ajy3qvotuqIAgURUcp0PXrS36fP+Zp7h01zbSKcHgnk30ja7jwLGTXHntVbywvETG\nCTj4wH1cnR3AskxELNANDcPUsCyr485iGyiKius6ZLt6SPeUXkrBTp7Q+fDnaySZ9fzp+15D3Q35\nzpeOsXPPrcikyeatQxx8VuH04Sd5/9+8nf/5Pz/V0XJJjWLJo1Guo6guE+PrmJ+dI5GSHTu2EnkB\nb/3T3+ah+x4iPLnM6bkGu2/Q8GwTdU8XQka0Kg0qazUarTblyKFebCIDEyObJ1A1lsqzmOkYU7YJ\nQ0Ghq4u1Sgk7DSgpyosrDIwPceToMdZffBGLJw/yiQ99kpGxTVSn29z7yYdZbM6h9AmuuOZK/vCP\nf4M7f+NN1JtVfv62X0RVdBLTJg47LQMvN16xYFGFCkKe70sBVBXH8cjmCriBS4RCRslB2Ka3b5xq\nbY2wViZtWaCYSDR03UQkAcLQ0BIN0WpgSBO3XSedzpJYeXw3BNfDThn4voBWDFZCqxlS6B1GVWxC\nv42meIR4pLMZDj51gE07t9NXGOK5o2cQUqV3pB83jIgQzM0vMVsc4pp3GShfCcj9axP73R8nmvpz\niCV2OkUraOO4IW4YoOkKllQh6GigWu0GIDCzeYrFGtJIWK22EMIgZWlIV5DKakzY3bDQYqbW4Psn\nj/O7//136LUNhG6TzSTMaC4lkfDEI4+yad3m87OIiapDlPhksxmC0EO1NFK2TUQHRBqGLYY3LMNn\ngLdAuZzlj37rzWjxGjt3tZhZvgZffoVqaw5L76JUUth7zWW8M/CZPqWzfddmDhycQ8oETc8zvmUn\n9dVZ2vUiCZDSFHLdWXQry5c//Gl++R//GkNk+OB/+yOu6CpQLK8SBj6ODNDbgKFQaqxQdlYp6Cl8\nwEPw3Pwp5t0yV1x8Ia2ZZWw1hW1lCfwEx5JEWkKrUWf2cJGtl13EzNk56o0mUkb887/+d8IkIl/I\nIYXk9NQ8xZUSPdkise+zbsMGYlV0XDeVFEEUdSCsLzNeMXcXN3bRhWDh1ClapXmymQythsPC8hKq\nTEhUHWFa9PeNkrKguLKEITTOzS9TSGdYLhdxWg1ifOJIEsUKI2Nj1FoNHNfB9dsE7ZDhwT4UGdNs\nlIiTgO7uAXr6h6hXaxiGRb53kFqtxLED+xkfHkcPXMYnN5OxBP2T43T1d6N5LrVGmyOnz3L4uX28\n6U2vIRSS3tEj9Grvh8uBz0Fj262Y1ffgeSUi3ydoNVABt9YBDti6zcpqiVy+wMzRadwo4fTpGdbq\nLk7D5ecv3cZg1iKWOh6CROicWG4wnhvgyAPPgp/QHNHI9g+xWKrgVNsIRwU01u3dRRAXGezrQjcV\nql6VulOnf3AAP4kY7B8lTARNp7P2euMffhNzow9Pw4e+eBk33vF3pNNHyOgeS805Hnu4zMaNYxw4\nfIjJiY3Mn13i4QefZXFGI1FDgkB90SKM3Vv6ueGCjRQMlTPzRQZ6uzA0hStvuo3x3VfzuY9+jAsv\nuRy11KRnYoRHv38v7/7YhwCVH3zwi0Suz/P7HmZ6Zh9DdoYmWRYcj5YmUDNZ0lJg1NqsHxulGPis\n1NtUGyVsoSASiZm20VWN3/7zP+aBe3/EqYNH2bl5AzfcejN7r7qEb3zt20hXsG6on/t/cDdvf8cv\nsRAs87q3vY1br7qGSN9MsdUkFZ7g8NSZn+ru8ooFS8NvklI1GuUSFm3iwMdvN3G9gDBSOX5qlmsv\nvxT0Tqdb7HkoxCS2zcz0CXoyWWh7tFQFr9HukIRVwfBgD26rgqJI8mObmT5xjMn1k6BItDgmRGKk\nOiXUwPOJYzCzBR6+5162DvbQZ5oIIApi2kGTlKFDGDG9XEEbHmG1tMauzWOkTAgdn8L675CZ/Xd4\nE/AEOOqdRGsX4zstYrdN7LkkQRMFhaxpM7+wCqicOz1PrdlmfnGVqiNxHJcr13WxZ90ohmbhJxJN\n1VlqBpiRynBmgMpCmcYLpym2XRoSXB300UGM0THWNA9LC+nOmEQyxBEeyJBcIU8qlyUMIzTDpuXV\n6R2OefVrvgWXQ+uMwle+8Mfc8Ut30mX0YRomd/7mpezZtJvv/+gxdl98GUeOT7EyvYDnK/iOQZR4\nSNkp/0sUiKu854230qX4mNkeqsU1JkbHeNU7f4VP/tmHee9f/CVf+e9/ytDEKAuxz8wLJ7nxzW+j\nvNrALzdJDfTw7XvuImiukdVUAj1H0t3FqflzmNks42MT7NywkWcfexzXbYNuEUjoSqUwUdAkiDBh\naMM4b/uj3+V333kne/vH0HULXwjGN+zi2Wceo3cwzc+/422M7tzMxMYR9HSedr3C7a/5LSqVEDU5\nx4kzB///WyEJIcboUL/66XTpfFpK+bGfFZVn6QaCjl2NkoT4TgvbVNENFaFm0OYVcKvEbR/dzONU\nKggc9Gw/7Xab/mwaU4mR7QZpTUHoKulslqTdIB05xO0qZncPE315hOxwQPS0BUGAEoSkDB3LjCGK\nOXTgGfLdPWR7+3CaZXKGTaPqIL0ATzrMnV1kdHyIVF+BWDbQZIznCjTVYvaZ6+i/6Ah9HzgFrwf7\n6c/guz1IfwA0g4rXwLSzRF6El5iEiU69XCb0IqqlGr4bIISJouuEEir1FgM9JlHooEuDblMl1qGl\ntrEmMji9O7FaHsSQUQQOUEsClMBDtwzkefGmr/gdjZiqEkURmUyGptPCskz6xhdfWq8szXczNlpg\n7tQidz//V7zz3Vu5cLfJq274Db57zz6+8eV7EaqNik3oJ0jFQUrxUr+WICQ2MkwvrnHN1lF0YjKa\nhreyyjc++GF6vYh//c07KdcrzDVLeLbFTa+6mkOP/BCh6NiKTjlYpu1XkdIlMvtRdItWrUmPbZPu\n7UKzNO564H4u3bSddODium0iLcPi0gLCMHCFRDc1FhZm+Yf3/TUZRWUtaJA3bEI/4MlnHsAQMXuu\nuIprf/62TklZJARKRCZl8Uu/8ma+/PVn8YMsnDn4U2Phv1I6DoHfl1LuoJNw/LYQYhs/RuVtBh46\n/5r/GyrvNuATotN88B+GjAMiwLCzqJqBncmhGTZGKo/a3c3Y6Bh6WifyIhqeQ9f4OMLoJYhDwmKF\nqNlGaioBCal0h6/Rdj3MdJpISCI9DZkMR6dnaToeXjug2GxSdQLacULZDTm3WOf+x15gaN0kg709\nnJ4v8sK5NR6ZnuVYeY19s8s8eXqeahTwwuwsxXKRer3FvlNnODozw4njpzlw/DT3fO0qWm/phxtA\n/HKMueHjqJZDqxWSShVwAjDtLNMzU/hOk9LyGlEcoCsKA90Fevt6qFTrWImO77mUK3U0dJJEoBlg\n6ILQCwi9gMBt4/k+LS+g7EWUmh5+O0IoBrpuE4uEiBaJ38ZK2fhRp7Oz2nQwDROZQP/46kvBcni/\nIJUss7p6jl961
0fR9Pfy+X85jmVmaLYDNAzw6VilEkCsECeSUEIgIZQKvg8PPXeQhbrL1PQZEjUi\nsTT6Iw29WsYwUyyjsmnLhXjVJo889iz7j5yiWluj0qhRbdRZLtdoxgrtAISp0oqa1Pw2Z2bOsLy4\nwjt+9U7aQUC96aDFKroMGJ4YI5IaKT1P0I4ojG/k7Llz6KogIWZ2bZnF4ix61GDD5hHe8s5fRNVl\nx8lUGBjEoKn86q/cRtQ6StScftlA+K9g8laAlfPPW0KIE3RwEj8TKi8KI3RNw/UDpo9PUy2uQOij\nWzqGnWbr9p00G01Ozy/TDFp4XoxAZXRyC1HXKMfX2kRhiKYbiFaLOIrRBORDE0MvoGcsbC3Driuv\nx/M8ImJylopqmIReRG1pBcf1uPU1ryWJInoHU2y+yCSo1mm36iwtzbHp53dz3/e/z+W7tpJTTdw4\nYnhsFD1l41ebNBaWuWDzNtBUjt1T4JK//yTKqx3EBxpof/BPxPO/hiZTKLJjCZREMe16pxnL8X1S\nmTSKZjJ3roiqGAjTQBgdpkg9jshlsyixgm4YGLpFs+EiYg1NKCRxTBLGiEhB6AqmaaDqKuXqKumM\ngWHoHRf9QlfHUV5VCXwfO2PTO74ITwH/CPbCBF3dNS695NXEiSBJNDZM7ORV11+NodkkkdpBP5C8\n5MIihUB0mHqgKNgkSGHxrUdfYH13F5dbeULbZrnqYPetQw3rXN4zjnd6nr5sF267yVhvDlsJwO5i\npV6nMDpCo1mhr5BmenEOL47I2Dbb+voJrDT3fPcunCTgyssu4Uff+R6jfb302CZDPXkiM8vx5SXK\nB8+QTndRrpUJfB/dTNE73o+RNvizz3wEu5AhIsFQOihwJMSxRDd0Crqg5jg/W7D85DjPltwDPMvP\niMrTdBtFSizLYOcFu3Arg/ieQ6GQJZAJcauKbdlsv2IPhlAIGg1kFEIUohQyNNsgfQ87n8dtO+Rz\nPUzPnGPLUD9SU2g5Me1Kma6BIZLA60CDbImzWuP44RNs3LGN3u4c86dOkrPTECvYhsHpw4fwmk0m\nt25GegFZITACn6rvkrJtTp08QzaXp7xaoSeTY+nEFImi0mpkmHr8rWz6xmcRl4N1wQL9l/yI4rO3\nkgQRQsYoSae/RMqYKFGQms3hF45j6t2IdtKRlBtGh7cYRqyVSli5NOkU5LIGqhaRz5okvqQRNLBU\nMA0NkU2hZhWWlmdQNB9hZIl9iZUyabfbaLpJKpsha2fo3XyO9HQNViDcrZKdv47RyatptgK6+mwU\noXDyxAqmniWKkk7KJdSOIZ7yY1+ulxzzpUSRCoGQ1Bw45bm86bJdHD95DDOX59TcOfSwxU3j66gX\nG2ipbnqkgRLXMTSFuSjk8PQZUn0DBH7ASqWEaVoM5vOkdRNbKvhBROzH7Nl9MSI0Geofoy+TotGq\nsNY4h5cY3Pyq20hcBccps7R6kmJ5if7eNO/4gzu58dW3EtJxj9HOy206X+J8kSKB4toyQnl537D/\n8g6+ECIDfAt4r5Sy+ZPnzjNY/r+h8mQHjKkqECQufuQgm3Uqs7OYXovK3BS2GoNuoQYRedNCJCGF\nXAaBZGigHxVBu1gjZ9pkUimCIEJqOlGi0t0zxuDIMF5llbQSo4ctVpYXmTozxeU338C55UUWZmfQ\nEujKpGkWl3lm3xOkB/K4qY6JXhK7JLYg25NnoG+Y1fkVdNVACxLWrdvA4tIa5XaDY1NnaQQha4sb\niIwLOwSb/wHGyKGO2YKMUdHQhNrZX4lMao2Yg4fPEjTB9DQKsUlYbXVw16qOYZkYlkmz6bCysszS\n8jyIgDBqIaMGuhZi6CFCtkC0qdaXUC1QdaXDHdEV4jhGVVSymQwqHcjQ5iv2wV8BfwKVlQ0o8Siu\nv4FcPo+mqCASAi8hiXSSWAXU/8flE0knUKToeCnEcQwxRHGIhiR0fIbzPfRoOrt3bKPoCp6sV3lk\nbZlz2QKHpKA40MuDxRLHVlfoHRzGrTS4ZtseUobOYHcX3WYGI1Zo6hZrTsD1t74WRc/w3Ud+xOgV\nF6NMDnC0WEPLZhnr7WP+8BFEGLKydA5ThY1bJ/n+kw9w3S3X43kuHac6iZAKUSKIkQg6xLFvfet7\nCNUiES8/d/yXZhYhhE4nUL4kpXyR8vUzofI+8IH3n3+mcOmFW7j+ku3oA900my1W6jUy6yZYqZWw\nsl1oukGr2cDMd1Op1TFTNi3HRRcqXb15Wk2HNdfh6JkzDGzeycrCArXWMQr5HLV6myiUeMUSXqtC\n19AglUceYf7cLBs2baTYXOTRp55gaKCfdVs30dfdh5ZK49smYaPJFTfdjqupSD+gTx1lJJNn+egZ\n9EKWYhTh1BwUBUYmx7jomv3oy4c6SLwvgr9wKbouiCKd8lqRMILAjSFRacQaxaLDJCYUa0R6zNqp\nOcYme4liiaZq6Ok0edPCDzvrlFrDw03ACQWeUIiNNFZvH6GMiX0fM53GdRySWEdTBapiYWk2SAXT\nMBncOEturgwHIP6KwonPbSU9UCdlCAxTkCQBAg3dUnDcCClUYpl0UBKI82lYB/2NBHn+X7RUBOqL\npK60jowCBtaN8YOH7mdy+xYGuwpE7YRCLo9bLdEIQp5eOIfntxHorOvLs2f3hRTrK3SldELfx0k0\nBic2cnxqhl9461s5eew4zx84xJvvuIOllVVOn1nh9le/htlDx8maWZrCY2b+KD05m96dm3n/R/6W\nWI3RZAcCFUuIhYIUKhqSIEpQFHjXr72HRx8+iu8vIPgZ5S5CCAF8FjgupfzIT5z6mVB5f/4Xf9ax\nsUmgsnqaUHqUqhXsdBeGnSWoN0mkhqoKPBPM2ELJ5chYaQzL5IUnHmfXpm2UqmW6N25CJjFv7h3F\n0DS6xsfQc9vQvTaeTDh+4gTbdl8HxLSDADeMGe/tw1JNFqo1bthzCWrWYGl1iYGuHIUuGxHE1NdW\nWJ4/R6qQ4+Az+9m8fpyRkQlaq1WOvXCcqBWTUXVQoLfrIVJ8C24HPgzuBTuoPX8j0gxotQOCUFJa\nqpN4KtX5EqWVNgQQxG2GsJlHMOcY7CSLocQIEaKikEiBbmVI9BhVU9FCB02NsTQDkbIQho7rxUSq\nSdq00QwDGccIVenwSpIERVUQAtZf9Ry8E/gTmD6+C7N/Dw2pE6mgdArztJr1jsNk0kH/SdlJX+DH\nqZdQfvzZHQYlICCRsLxWZKB/kGKlyPrxCR556gl+7lW3cM+D97N+x1ami1W6+vtYPHWSO667hvnl\nFZpth/mVFbryBZZqLfpGh8n2DbDv0HFed8ebePaZZ1icX+Btv/hz7Nu/n7W1MrfecjPPHzzIStth\nV18/MmqhJZJdmzdxyd4LmX32AJuuvohYVdBEx6JVEzFq5KIkCoYwuPOX38nhQwt0DV5A0gjQYpfV\nyk/vw/+vpGFXAb8E3CCEOHD+cRsdVN7NQojTwKvOv0ZKeRx4EZV3Dz8Fla
eITpqQyIRQCrI9A2hW\nhlQ6QwpJ1jLpMWxMQqhUaFaKxI0qiefg+Q5J7BF5bTJmmqTlEjs+od9CBg10EtRqhVa5xtGDh9ix\nbRsijjoOjnFESsZU5+dxy0U2TU6QTZkIP0QRHf66jkYcJfiOy+nDJzj53GH27tyFZVocOHiYUrVO\nFHUM5GQsWLdrma1774Jbgd+D2nWjNA69B7fq0lhp0Sy3EU2BHdk0Fl00tZuc0YVMVDzVwAh0Al1B\nzeXZf3gBD4VASsJEQ6DgRXBmdpWzqyWEUNGsFIZuEgtINAWpGAg9heN3FMZSKIRJx+XRskw0RWXv\nbZLMuVU4AMk7dIT7C/QMDTEyNoYSayR0sNnZXIo4jomS+CU+pKqqLz2EEMTnIUHi/K5klEiCpOOw\nousaRVyqpTJ53ebKPXuZPzdLureLA8eOMD4yxsT6jcSWydpaiXw2R4Ck6HnITI5rbnsDVTdmZm6R\nN77lzZw8cZLSWpHXv/71HD18hPLaKjffdANPP/EEQbPFza++HV9TqbZ8rtywiWEhcJ45TPnfHyRc\nrBEmMRAjIkFY81g6eZa5kyV++W138vSzR/CiCEfWcaIQX/0Z7VullE+8TFDd9FN+5m+Bv325zxV0\nzNGkKunpHiERksHxndTLCxjCw86YuNUGotHJiw1Vp1WrEiEp6D2Yho6ugx2EBEGAUASmpeMGPhkh\nOXn6JLqZYseOnbTaIa1mg6xt4TptDh86ysTERsIkYmV1tWPwoGmslerUKsdJElhbK9JsNunOZdix\neRsrzRKnTp2ikO9jqVKjUa2RSJXRySIX3f49lNdIeC2s/UKex77yZkTjFEO6RtoPOH78MK3VFuuH\nN1BfcSjX5vHVVEf6Ig1CJNm0QU7aLK5W2RT2YRogkg40NlEShkfHiYSC9D1IVGLVAEWhEXjUGy79\nw300/TqKZVKvlSkYOl7UkedUSyVSww/Dr9NZq5y9FHXWoVSdIyoMMThx+flrAjIJUVUFVZVESYyg\ns/HYUYl31OIIQUKH9KUpClJI5Hk7JVVonFqeZSxlEzkBvUYKNyegVkQIwfXXXcea2+aam29GFldp\neQF9kxP0j46RzxZ48NHH6SrkuezSvTz79HNIdK659hp++MO7URR44xvfwHe/9z1MM8W2jet54qGH\nKTttJvqGGR4fYnlxjiNnjrF7eAzx2a9yyV++l1BKFN/lhUee5u677+Ox/dO0/H6M7MVo0sCTYHVf\njCJd4KeXj18xbVgsI+R55bGqpxHnccvnpk4j2pWXdthbtToIGFs/ydryCmosSeey9Hb3UGs5YIBq\n25RXizjtkNHBIQ4c3sf2q65GNyyidpNsWiXf30NxYZlMNsPNr3sNzVCQVgRuq0G6K0ukSHrdgKcf\nfYZmtaMIGFk3SuI3MKyA2lSZrnQX9YZDtVVDaBojIy3u+PV7MN4ewyZo/KHNAx+6hbhdJlvTyMkc\nJ/fvQxgxfbkRTp1aJYo1RpUUwtAoOip11cJN2pgthWNJif7Q5Mnnprji4u3kiFEExJGKqgoizyVI\nIgSCRO34HcfEHH70WVAU8uO9DIxOYHUV8IVPCkGj5DG5e4HMXKkzq3xNY+6eKxnq7Sao1EkUg1jr\nLEIkktlz8x30ndBJhOTF2o1UgESS0AkMCQhF4MdRh8gmxUu4h7m5WcYnd1Fqt8iG5z2Pg5ih7l6+\n95270HvzpNI2r3vHO5iZPkdXbw+zMzN8/+7vsWnjdnZesJP777+fnt5+brnldr72ta+ASHjXu9/N\nxz72EQYGBtm+fTv3/+hButN97L32RqK5ObSGz+m1Ena2i7NBi3WrJTSpQgSf+Pjf8MPvHmF2dQlF\nSHx/kciIsMxJQukhzRQq/5u2Fb/zDbeRSlv0FHJs2zLJpZfspbu7hw39XcSOhqLpmLpBPfRJPJ/u\nbAE7khgJqIpKu1gFpYFumVT9BUY2TlJTHZZOHWf7hvXExTVUM4XXarJWLNJsNdk6uY7a4iKyUWJu\naY3erh7USFKZC9HMLM/vP0bT9bEyOUwrYcf4MLXFmKUjx9E9l1y+n1KxjhGr5PN1bnvH97B+vwMZ\n8j9q8PCHbmXIGUet1CgsOjRas7hqhtVikeryHLI3TbNRww4EbhCg927h2re8nZnPfIic59BQQjbp\nDu1mD8+cK7N3JE3aTiNJiMIQqZnEkYKQBrHQUbA6TvGBpKCpNGaKyAYsNWtceeO1tBQPU9N5yxun\n4DeAP4GVc9ewUjcZGummZ6tC3/h2dNROAMoOB1LTNOJIoiKIOw7kyPPBIM4v5BUhSOKx3sQsAAAg\nAElEQVQEXdNfYrEoQJgkHJw6SVBqIgs56mtFNA0UIbhw+wU8/OiTfPyPP8Pj+57mwfseRNF0jhw9\nSq1W44YbbmJ8YgOf/exnmJiY5NZbb+ff/u2bJAm84Q1v4KMf/RiZTIYLLriAe++9l6GhUTaOruex\nh+7l+okxqo5LuS0Z2bkXWUjRLk0hkzpRW+Odv/fHPPzkezGLc2iiiaL4543h+85T3Wbh/0X69YoF\nyx0X7MCybSY2jLJu6xiamSYiJnbrKFKiSYGMEvRI4rkuiWGgBj6oKr7rYVkqXsshY2qk4oRz+w/S\nPzTO5slJQq9JkkjcsMH82Rn6+wfI2RbtdotcNo/jurQaLQbHJkipJivLi5w6fpRUKo8S1Ni6aRNJ\nWMaNfBYqZXZs2oJqZXjwkcdwIugetLntV79N7sMOHIP4XpVn/uZqjnz3LBMjMet7Bnl8+TRrtRb6\nyDhzmRRFL0BJoKFKdENFD5pIXSH2BW1LkG/HTGh55qI2WwiZqTQ4kcoyhoOp6qhJBzkXQcdkW9GJ\nUfENg1qSkFdNlABEs8GQafH0w0/ROz7AL/zWIPn5tU4F7Ksac3ftxWsUmTseUGs4VJoKW6/ciBQd\n+nCtVkMmEnHe6ERIOr9R0wjDTrXoxRkEeGntImUnS1BUQSsJuWjPRUhUMpPbKTVKRGGA6oRMDvYz\ndfI4V15+JfvYR6lSY9u2XfT09BCGIR/+x49SyPfy+te9ia9//WvEccyNN97IXXfdhWVZ3HHHHXz9\n619nYmIdg4NjPPjgo6SHujDzQyylCgzvuYo4nSGzZQtdu/aiaSYrM7P0XjTBn/7F7/LeX/8t2l4d\nRUl39pHUFIH0ySQgxMu7u7xiweLIFtfdcB0pVSMlNap1j0RG5Af6cX0XNIEaSbqyBZ4/eRpdMWm7\nIb70SaKIxPNxQ59Fp83c/Cz5TB57cKxTVbFsZOjjmiZDOy9CNxRknCCjCCdy6NuwkdzmbUip8vwT\nz3B2fg6hmCzMzHDhzi14YYMrrriMttNkfPt20Ew+99F/QTMNtl4wwbar/4HubxThLkgehUf+jwuZ\nu6vGqGVSWi4zVW/Q8kM27dxO2fHQtBzDsUekG7gDPSRRgromqa9M8/gXP8hI4uInMbElWAwF6XaT\ngZbGVKmCiNMMFCClSUQiiBWNR
FeJhURHcP8zz5HTO/a2s3pE2tFYjHwcVaF+donxrcfg/cD7oF19\nFcNWP0ZBQ7M0Km2XQqHvpYpXIhOOHD2IECqJ6CRmqB2b1DiOX+IyvhgoLy7wX9pmUzrl5CCOGRgd\nZn7fcWpdGfKqhSsVlktFBgeHmZqeYa5cZtuW7fQOODQaDR597BGmpqbIZtO8+efeyDf//RsYpsbl\nV13O3ffdg5nOcNnll/CFr32J8bFJBkfHePzhx7nw0t0MbtjE8/uO09JCurvGGLB1KuU1ThTKXCSv\nINOdRySwfnKMwG9hSgtPkwh8QstAhMso+CSx+bL37CsWLFe+4c0kuQJtJaKe+LhKQhwEiMjvkH09\nh6DeJD+WZeOlexASJjeME4YxfhjiNxtYiULgh+zafREoCrplkXcD1uYX6evqp7s3T743z/OPPs7G\n4RH8OCCbTtGu1qnX6hw/cgbf88kbKWKhsuI2CZwadrfBUw89RCqVoVxvc+TISax0mnx3iok9/8Tw\nc0sdf+Mn4OlP7KD6zBjjG03OVFYY376N6SceJ+VFlI+cAkMQpBVSQkf3QrqqNbJ6ghdEZGWMTCJC\nIVHR8ByXfDaHSsCIkWPB91lsakQ69KgKWd1Cmp1d80QRSF3n2NQZdig99NpZjrVaVDUIFa3T45M2\ncbwV6AWehNSvHWCl93oyI5PkDYPRzSbSHkXoOgiJIgVTZ2bwg4BE6bDt45dQED+eTeAnysgvHZeA\n6MhiAC8OGUjnmWs3iZMExVDpKnRx+MxJLrzhWrpHhpiZmmHm7FmEImi327z2jtcyMDDEpz/9abLZ\nLNdffz13ffvbjI6OsWXrNr77vbtZt24bGzZs5IEHHuKK629AqoJvf+eH7L7oWq654ip6sxb3/eDf\nefwHj7HvuYd5281v4Mv3fRs/ibn7+w+AkpCEDhEhigJqoqLFgCpBvHw/yysWLD2pHmQsyJgp2l4T\n29awewvEjRaqZRGaCWlbEi4tIQXIKMY3TcKWi5bKMHd2jnRXF919fYR+AEhmz86B0JgYG6dT24ko\nLy2RVRVEu0U6axI6DstLq0xPzRIEkCSysw8hE5IoYMO6dYRBi/7+IVZXSkydOocQMRffcIZdVz5J\n+p42vBd4EKbuu4Du0zeyqs7xwrlZjPFhHjlwgGwYktVUmpU1+rp72FCUzClN1IFuIjXiXBgSZlRM\noSEaAaro6K90BMPrN7A6fQKttUZcC2gN9WH5FiEJkRliaAamahBJgxemZxCoxGqI7kR0SZ1V4dNv\n91DxHFKFHB/7024++c1l7NeC9qEiI3d+hpOP/Bptx0WYFvmxDKncMImUqEKwfsP6ju4L/sMs8uJ4\nETj04vkXg+XF3hYFga5rLK4t0QukbJu218ZtNcmk0gyPjvCpT32Kd/3u72DbNtddd22HLADs37+f\nL37xy1xzzTWMjo7ywx/+kAt3XcDExCRf+upXueDCCxgeHOH+Bx7m2muvp+o1eP6Z4/zcz72dnv5e\n9j/9IM88fR9Z3Sdv6Hz+X7/J+t17qFUbKJbFR/7xX4gxiBXwQgXz/N6QjENiofGf6H3/w3jFgsWw\n6NBsMcCDVCFFw6mTVgSxmqDEKlYmhbfawDQ1QhGTRJJmaYWlaoNdl1+Dls7iNRpops7c3CwDvf3Y\nqQJSxjiBi+IIMn0FGqGkSzepl+ucOTlNreoiYxUvcYmTGN/z0XSLTFcvDz/2JKap0Wy3EIqFXajx\n6++9j/yTbbiBDgzmu/DcwhaWv7uF0U0FujRJoWDTs3UrU/es0GMVqDkN9JEBzjXqWNik0fHKLqqV\nYcppMqllGOzvoRwtkvgdRomuKSwVy6w1Kqwnzc5YZ1kTuH6IamjUYo9UpKOonT/j8UPHsbDo7ulC\ndy2G2zbH4gau5yFMg7DSYGnF5u/eN8Rf3LWMcgWkt5+ld8tXmXn0ZjZsHOzo8JTzFS8k5bUqqqIS\nJJ0SMT8RGPBj4YsEOC956WDIOu9NpEREMU8cOsC29AAnl+bJ5TNErkscS4Sm4LRbfOZTn+Y3f/NO\nTp06jeM6FItFFEXh3e9+N/f88IdMTZ3hjte+Br/t8d3vfIfbb7kNuzvHt7/zba6++nKkjDi2/xRv\neO0bSPw6n//0Z1DcOt1ZnbYjaDcT/v5Df8ftN9/Ev3zqS+y98Tomd17MmVOHCL0UURgRxy2EUAjV\nkChWEDLg5cYrFiytlQpKymSmWEG3LBrLawRJQCPyCaIARdNplUvIJKHZbGDYNqXVNSyzwNj6HURG\nDtdPKJUaZLqyrL/yMlRNZ21uhcrKKut27CCSLpl8F5f13UK72ebYjx7EyvdhJm1azTYiAl1EdPUW\n2LFlI4MDvcwur1Kr11laWMYPAy6/fT/5j7fh34H3Q+sGmxMPXEr17w3GbryY+WYD4QcUhgZxlZjR\nnl6GXJdcX55y4IFhs9huMNtymRxZj1Gu0KsZVOst+nWLQV1hMdCpihCz7eMtLjCZHkBrugzJCHXN\n50xBEHfnSccagepRiyVrtYiCaeN7LaJQ0FZjhrB5wpQMKx2sqKlapHIZHv9+yCc3dvNb36nArTBx\n3wms23YRt6+kNzdJU6aQUqAqOk89/SxC15F++NI6BTifXHXSsUTyHypHLzLvk+Q8IFYRrK4t8/ZX\nX48iwEinySoqrufRCB3ymTQz5VUeffRRxteNQRyQhB679+zhk//8aV57xx2s27CBtbVV7r77h1x6\n1VUU+vv46le/yt5LriCd6eOe+x/j9lffzpMPfZ+12ZNcsHUSJ8kyPb+MLztlbUPCSqnM8MYNPPnE\nI3zgb+7kY//X53j4R1Ooio6m1UkCEzV3KbEekk4CYP6n3rOvWLBkN45h93czIAW4IdgW0mvjB52F\nvqLphJ6HVg9YW1qiWCly6ZWXUUhnUaSCE1SZm59jdGKCvGXhTE/RavmkBoa5cPM2VuYW6Rnqo7Ww\nypEXjtCs1nGCAEPTUaVAkZI4dBgdGiCfzeGHDifPnmFufoUw7jjQxzJhYGwB7gU+Ac+Hl3PsY1cz\n/9hhrhro48DxwyRnl1DCADkyxOH9yyjLRab9NhGC8cExUsLER6V/cARfxmhRzKiZ4ly/zczaCutS\nGUQUoEgPRTNJgohUf5pivc6wbaA1q6QKXSTZHubdANsHGcScWVxGU3QswyLnJhQby2y74lauHR5i\n8cn7sWwNLdHp7ulmvlbkK/+UZsdOnes+uQpvgP6n/40DZ1ys1PsQqa5OKCQJlmniRTFSSlS143aS\nnF+fvJSC/WSgwH9I1ZTzJeiW51Lo7yE66pM0NBqeQ2BEREC97pC4Hs8//ihra1sxbIt8Pk+pUed/\n/OHv8ZGPfZTJDRvwvICbbrkF3bT48te+xqWXXsmWjZv4t29/i6uvupDvfOEjaBpcefXlnDx1grVq\nHamoRFJCGGJqKs/v28ftr70dL0yolUt86APv452//DscfP40vuIi1EFCUSOvdWHp/2mD5E
98t1do\n6HFCPL1I6+gZKgvzVGemSRptkuUKqVaAXm1hVFvUK0WKpVV2XrCTXCaPZRksLM/TbtTZvWsnlq7h\nNOqogcf4wCC2blOv1zEUlcSNOH7gGKViGdePUBQLL+g4RAZhx4tYNU0a7Trz8wtMnZmj3XZwHL9j\nYCBa9HaV4AgkF8GTd28imS2zox6SOrvCld1DXB6mGWmHbIk1RosOF4YaN2j9bBUWq8vznCrN4SoK\nvesn6enpxQlbCNEhFDtArGpsuv71FC6+GRSbLkXH1CwKo6PM+T69bVjnC/pDAyVQqLRjQqkSxwlR\nnNClqAyagoHRYWaPHeKmdZuorK2yOrdAJARqIYsTRIz0DPDxP+7n8EQW3g3iTXDh1fcijemXdGRR\nHFMtlYFOBpYkHZ69/E8W+VJ2OJDi/PH4vE+wKjqCy0CRpPM5No5OUMjaxLqOIj2ErJEfNFDTAt8K\n8Yhp1JuUKhVWSyVOnjzNn7zvz7jlttu59rrraDQbPPrYY2xav4Err7qUb37761SK83z3W1+kv8fG\nsgye2Pc8S3WXQOh4cUwUhSSE+FGIoen89Z//FRddsINatYaiKHzuXz/EYC8Yaoxpg5oU8etneMsb\nL3rZe/YVCxbVEDS9Jrahk0sU7BiCtQqGIqm3q7T8BnML50BV2LprJ8uLy0gEczMLjI1O0p3rZenc\nCvVqm3LdZbZY48CJUyRC4IQ+1XqV559/ntmFeSr1BiuVKsVmg5LThJTOwPoxpJWh5oU0Y4Ebi04H\nYNIxqEtn06zf0UbZD+yAarOXLZs20hO67I01+ppttDgk39tPkksj0wpJ5GK6HlbYZMzW2aVo7PEU\nml7IwsoyCytr5Hp6qCQ+aSdm+9ZtVJwW6uhujOvew5LjkQMaK0VUoVAHllMq08Uih44dR9i9uInA\n8UNiVMIkgThkSDPRLYv1TouD//RRBkWCranU6w2efOEgmp0iCiL8us/n/uJSVn9FhQ2g3hlSGHkf\nitIAEoSi0Xaa52eQn7hY58nQL6ZbnUMvFgHil6piL739fLXu0aefIG2a6EpETz7NrvFB3nPz1bxt\n907ecf0NXL91F06lTdvzMHWLHTt2MNQ7yrNPH+LA8yd49JGnOHzwEN25Am996y/w+S98ktnZ4wRu\nA1s1WGqGrHkxrqoRCIWW53VmFTrEZCFUFEWhN1fgV9/+diZGR3nyqWfRUlk+8YVPIc0MMrKwZYwa\n1rnppr0ve8++YsHy/NP7OHz4GE899xxP7X+B/ceO8sLx4zx38DjTZ1c5fWyWdO8o5PtotkLarQjX\nCRlYvxlfgjnQjT3Ui6pp9PQPsnnzTiY3byZ22ihuk5wpaJdLhI02MozpzufoTqfIahopFdxaGaRL\nFLh47TZe26HdqKMlESk1IQxcBkeWOh2FV0J9eZRWbY3CUC+psQ0E6RzVcpOVdgUqdeIoIdYUlC4b\nK5Mim89jJgqJDuPSY53QiJttHMVAzw4wtGMTU4vn0IRgcf+DyBOPkbFtotijJOssN6uUc2kOiQij\np4eB7gwZUURTYxwnRJUKhRgsoeC0A3r0LkpCIZ94bBIpUpHKOlWCVNm5cTetpouIE1bnPf7pv12I\n91EBU6D+3RL57DtBhigiRtP+F3PvHWXZWZ15/973xHturpw6R6kldYtWzmoFMKAGkYwMzmBg7DEG\n883YHnscZuzBJjiQwUnYBgmwMEpGEkLIVkChgzp3V3dXdVV1hVtVN9+Tz3m/P251Sx6PNesz3yzN\nWavWqnvuunfdqvXuu/d+9rOfR19BueiSKKW2whlLEQpUkiKUQKguUCy1eEXUr5ttEpWS0oXE/+7R\nb9Mz1o9pw3DO45bL1pKzfQrFhHVFxa51A7x352Z2rRvk1vWjDM2d5eYNq9h9w/UkQUSr5bJ+64WM\nrBnmt379w5w++hK6UigBHgmNSBEiCVyP2O9gGRpSKDRpYZo5nIKNZVskKiWfyfHB930AyzT43Be+\nxJ6jx/j4H/4eod+lUxXKBartV2/wXzN1l/r0fgxDx281MVVC2PIIiFheWoZEccG2bdRrNQI/otFo\nMDo62i0V/ADDMhGmQbPeIGsZBK6HiFOk6or2LdbaHDo0Thy6xAoaQUBpcAAVgopjEIo0SbuoW1c3\nlcD1aTRqmJaB1AW2ZnLLT97H2Mcm4D3wbOYNFMav4/h3nmJ7VEQszLG0bgC7ETE7NUF++zZe3LeX\n9UG3NGkUTXwvwpOSZ/WQzYMbmW/UmWxXKBXK+HGAEgk9vqJgaMi2C1FEU0s4Y2hoZoGBLVtJpyeQ\n7TYyjUFP6RkcZT4Oqfg+se9jeS2uLQwj9AzNRpueNCESiuNJwMCG9eydX2DQyFIoGKT1Nr4b0mPn\nueJ2l/f+9iHk1cBnIL7jE6jkA1z5ustYrHqk6St4YSu9ystlmESpmCSNu573wkIRIdC6KwFSEkuF\nJTWsGC5d18tvvuUGCsqknYCmQRiBIS3SpMW9B8eZXgxZkzrE0uHOX/gQzsgAX/zrL3NyZorTp8ZJ\n0rA7PBUaqRJI3SRB6y6cKYUQiiiKMU0Dx3GQUpAkASRpl4irGyBg9ab1XHXtNSzXUz760Z/mr/7y\n77nnr/+c9/z4rXR8lz/45Bf+/eou/6cufXoeI5dFy+g0GnXKpsPcmSlqbpvLd1xOdaFCZaFCz8Ag\nY2NjBEFAEscrBzvG0vMsLC8yNjRErlzGa7dpNVucPHqEZjui4ackKYRJCEowO7eEplIM08Z1g64V\ntUgg7fKeVJoiNJ0gTknjGFe59I/NdDPL56C8Zyf1ZkqY7+HQyXkylmIWH3CpWwlm3OaoneKlMesS\njbZSGMrA7kTEdkipWGR6epI1WYeF2UUGizmsYpZ8T5H6zByZWEcJG4yYvsEicaqTkHRp42HAKimx\nQ4U3O48sZ1FpgkhBGxvlmfkGPXqEHccMWTmsyCeHwG97DBqCTHMJApOsk0PXE3wV8dTTQxh/P81d\nv9OAb4DYvQepaQwPj1BvnsEPwvPI8bkgOUdtEXTLNEM3yTgGKtVRRMRRiu/7pEhIUmIFIYoXzsxw\nshNz+XCZchIjBEQdH9NOCWuKvMgxH9U42pzg4rUX8I5f/hne/M638MzzTxIFCUolKOhuMmomCI0I\nSRwG6LogCWPSNMayLDKZruNwFIfINEYq0IREJQkIyenx06Rpwsj6HXzuK1/mY7/0K7zw1HfZvfst\nfOKPP/nqZ/b/eFT8G5dWskmkwA07ZDSdEwePoDIGWSvD7Owcru9jZrM03Q5n5xcQmkTTJUvLS7Ta\nbUbXrMXMOEw0WoxpDi035qUDJ3l+/z4u3Xkpjp6lKTRklMHBwUhZkQcyMTNxd58mdKkuzBGFMXam\nK5sjTQMhBP1jbazpCByIenIcf2aJTRdfy9MPPsZOzaR3aJQr33ob3mSFxalJ/L4epjXB8qET+J06\nkZGnXa9TckpkE52Z1hLlYo4kjuloMYFKaLSaS
K8FkYchbAQdcipDve0SxDFLrWW0NGFEghP7WKli\nUUuIlIORagRCMbTpQp6ZfZaBzRvxDp8kSlK0KKKYM5iZX0A3FNbKvklISqLD2bDFyPBmxo9m4bYG\neAA+y0vLHDt6tLtHI1bWIekGi9Qkw8PDCAFnz851SWMi7cLFSYTUFGoFYBZKoqUKiSRVKYGe5z99\n6V4+/5GfYV0mRQJSS/E6Lkr3eeeOLVy3wWCyPs/vP/wIdjHDc49/H8sU+LGHNEzCVENJA01aKAQq\n6fqIpqqrG21ZDsVinlq9jiZFN+PRHbSKtFu6KaUYXjWG1BxmJo7Sahd5/PGn+aNP/h6Tx4/y0Q//\nCl/9+v3/5pl9zYLlhyeOsW39ZuxYY2l5mQt27sQjJhYC3XJIgoT6UpVGa46Lb7iaqFKntjxLwRnG\ncfIQpfhtj3y5zKmjx5mZPovKGbzjP76bU7NTLJ2ushwuMWAMIoMElYCuKZI4IEWR6hq+nxAlPmmS\n4HsJmqbheQmWqbNmQ+V8vzJ3qpdqM+X7f/cQ5cWAvoEyZilDy29QPXKSilclCOusWb8Wv+3T2x4i\n1BX+2Bqe3HeARcsiKxJEHNNUAdtHVrNQr1JPwekk5JRAJREaGo6hs+w2MfIl9EhDpG0SobEUxtha\nhobUiE2bRIQkSvHCdx/j9mtu5rFnn+baiy5BOzlHIi2M0iaaHMNIFXUV4UcJ2zds5qXnnqZ3aJR6\nfZGeKA8O4IIXdHj08RdII0GMD5hwLqugMEyd5doSuq6jGTpxrBBaCnqKUAopdUAghUCQoIQiJUGT\nAtIYz3T4yKfuZt2gxU+9882U05RhpUgzDs3GIrpdpt5uMTo8yHSnARHUgpBEpCSpBMNB0v0iU0kC\nKkEIiaWbWLqGLgVLy/PkslmiwMM8v82pMAwbP4lYs2EduVwOJ+tw+OA4laUF/qb2FT78/g/Q8nwy\nnVc/s69ZsFx2xQ7OPHeAkUKZgqE4tf8FPN+jMDSAYVi49RamoZPXE6Tv4S8vklMRTr6HpcoybhAh\nUsHx4wepRx5WPs9CZ47D33+Esws1dFlApj4in8HMZNAMhUKjo8Ucq0zik5DpgJEG6KmGiGOUiEji\nCDO2yQ9MwkPANXD65AjaQottpsWAErw4cZANi3mKZkJ09CShDKlGIWptm+kzp9DNPHpGRwiLqzds\n4/Fmg1nDBi+inDWZn6lQtxQzOUna9hiTBpbUMOIYqULypTKtBGwRkWoJU4FL3jBZVehl1UiJuaBD\nkArCJCEnNLz5BTbkexifmWRV6MLazWz97U/ztZ+9kTXFMmOlHoglZ6tVMk6RHrtEppzHap+BDOBC\nELpMzzWJ0rSrMilWuF5JsgIjy/Obkud6mXN0/jQ9l2G68PG5HucccoZKSZSkYRXYV005+pWHkF6d\ngWKG3lKRN197I6uMhMG+HhrP76HabiHtEh2lo2s26DZCGgip0xWeAKlinIxBLmvjtpqEvo9jGlh0\nuZ8qDhFKR9N0UgkXbN2GlbEJgoADLx3A8wJUPWDf1Cmcj/wSgxsu5+6//e6rntnXLFiWJk8yONpP\n2HJpVltki0VWb9xIrdNCpQnFgT6iOETTU+J6jUwxS2epw+mzC8wtLOAGMfOzi7RDj0AI3CShlTYo\n9BZRkWBka4npIzO4RsyZ1gJSxgil8LSIZbdKYmnMNT02j43Rp/cQtTyMjI4fBhhSY92WRjezfAD6\n67tYXjhEdk2GUn+ZM9UOc+2QUhBTzGYR+TzxUpVcrsjJ0ENTFkHQQERZZmstVtsmSXEj4VrJ8swp\ndMegd3iA2XqFgTVrmZ+ZJO/kSSOPQpSQsWzSyEBzl4nMAp6pM9pXJm4soYwSeWnRjjxSLUVKRbtV\nIwl8jLLAlRLR8LkkG7FzZAN+2CaOPZSf4C17ZHSYmTrJhtLFtBuqm1k8kJpLtbGAkCkGGRKSLny8\nklm6Oy3y/HwFJFJ20bFusKgVe/V0RU6sO9HXNK1rjZd6CF0hE4hj0Ap9TMYpzY5G2xnimJdyevoM\n61avZvHMJGGsIS0HodtdpftIgRIomaJpXd8XXU9pNqpIIrKOQeInhJ0WhiHJWBkCP0KzLNZs3oKR\nsanMz3P27Bxh4JEAeZGy66ab2L/3ea697kru++qXXvXMvmbQcauZ8MILL3BgYoLxZp2jy8u8ODvN\nVOhzarFFVVi4fSW0tZvwCmVEoUxkd+G9jF1mvuPS6rFYNl1OhcskeZN8uY9TM2fpHevhxMxJFIKg\nUUdTAUGnTZp0mFuYYvHsNNs3bmCpUWeuVWX/wjgzUZWWiHHjgHwhoCiqMAnpNo2kM4bpdUGD03rA\nruF1LJiKHxwYZ7bTJPRcCoYJekh2oJeKCoikIHA9NMuglCS0jxxkuT7PyMAqyoP9JColqjXRUoEx\nUkYZGa74zF9RyfRhz7kkwqMw2IdMJTdccytnGw2SKEZHI5fJk9VzZHxIhMLPm2R0iUxjGrqkp13n\n/h9/J72VOVqtJVSQYKQpsd8m9FysUoE4V0A5fefLMClc2s1uHaIQKCkRmkTJrkCFpmlIKVAqAWKE\neHk4qes6UnZdy4QQXSES0ZWOPUfGlMIgjWISkRDLhCiI0WJFHEb4aUJs2bQ0g/XbLiZnFQiiBENo\nmOgoZWAKSUYXZHQNR9PQkxi/3UYkKWkQE3o+SRpi2Dq5bAY39LH7Brhgx6WkwMnDxzgzNU07cPHC\nABuJLiV3vvUtPP/iS5w4dQZT/F+6KUkUsvOyK3DKZTw3xEjBVyEDm7ZSH58gSSSWAq/SIUybHDkx\nTuqH1LwOSwsV6iKinQFRsOkEdSqdCloEzWab+T37Wbd5PZqSWJZFGkUEvk8ninVmP2gAACAASURB\nVNh88YW0jvucmjzOdddfwpnTk0hp4qUerdYSWc2gd/hUV970Cliqj/DSvjMMjg1Ttw3KF1/CWs3i\nhrEiD56a4FDVZTgyyHspxmyFVfk+3OWzOAnIJMUulljyO+TCGF+IrkaairD7imRsEy9UxEYZrzjK\nfGE9V/3UXTz6F18gXKyxfugCnvPP0pidQTgFGm1Jbz4HQsc0ffKaiTIFk4vLFI0SQ6aO8udQpo6h\nm+Q0gS40MDTSWGHnshSFIJ/ro3Z8nGKvfz5YNOkipUGagJIvy8DJlUZfrcCzL18KIbp9yrmSq1ui\npYj/CXiVgEol59deVhgDADfdcAN9AwOMDo4hDItHv/c4qwZGyGQ6LNeapJqDFBLDshBJB1K3K7RI\njKZJAs/H0rq9Usax0TSNlu/ROzLG6g1bmZqaYurMma4IvEqRKqXftlASzs5Nsmb9CG9565v49J98\ngSB+dUXK1yyzrF8zigTcxRpxrUZzbo6wXieqVHDdJrHuEzTbLM9XOHXsJLXlJpWlGpXmIktJi6rX\nYGG5wsTZBQIvpO41cGOPZr1Jp+Ezf3IBGaZ0Gh3qtSb1dodKrcFz+/eiZQze9LbdNFoVDEsnTWM0\nXXH7
rmsxtZTtO314GrgWqktrye2fY/OFm4mFQpRLPPrM06hDRxmRivzqYRbTFDVSwjctypvWMW3r\nzMURS3HAYuoShhFODGkYkxnsQaUJKQJTN5Cxy1C7RaNylNHThzh87z8wnKTYmqSlJfRlDDpnTpI0\nXUbWbaOTCDoqwU8j2ipEapLUV9RFyrrRzahEQyiIkogMEpQgSCIaUYu03SGptcgqgRX4pLE437Po\n0mO5XkOTAl3vcsLOy7WqlzcIX6nqoq1knHPZw7KsrmPbK3ZdpHz5sZTdH1S3sDNMk8suv4KMafPc\nC3tYWFziJ3/iPXzw/e9HVxFB6IEE01ToeoSKG5C00WVAGPmEoY8SCVKHQiGHIqHd9igPjaJncuzZ\nu4fxEye6mgKkyDih13ZYNzpKvVXj3e/9Ce657z7Wb1rP1MTU/26r+LXLLM3GMpq0qdbbECf09ZZZ\nWlpi7uxZ5GCBtGwzfWqOg0cOUllcRAFBEDDbqDHfqiJti56+fmq1Fl670f02jCOiOIVEUsr1YCQm\nIoqIUoGQOkmgWKxVKOtZHn3oASoVl/6eXrykw5YL17E4N8mmoV6Kvd/tBsuvwcETFr0L85x1fWbn\np9ASn6hS5YZsiaEwxtw6zMGpBeKZChMlk4t6VqEGxsm3AzKBT7NooXSDwTTDvB5hDZTJLNeo1ZYx\nTBNXS+gPwZYh87//G+gyJPQ7ZG0TreUhUp0eBFYhz/yp49iD/eilDLFQUMzSXKpywcAIZ2pLnFia\nor+3h9T30VLQIoVuKtIVSaNcrNDDEL/VwtFs5oLay5lF82g1GkjdIE67Si5qZXj7ykwhVrhfauXA\nn8sqSdIVretKvP7LfRgpX7lR+TKXzPM8+gYGmTgyzro165ivLrPvxb3UWg02blxP70AvL+0/ht3f\nS+RVsWSK1B08zyOKU1JiTNOkXCrQ7rRRcYLpZKg3XOrNCkngEa2QQkGR1QS5osPeicM0g5R3vP3d\nPPXUP/Pg/d/H0jMkvPry12uWWcL+tcwsNxjbso7NN11L2pNh/euvZPTai7FX5UmMAHuNzZYbL6a8\ndZBf+fh/Zviqtex6600MO2W0jmJ+fo6lSoXGfI2wE5PEgiBIsHMlEqGYixqMN2ZpRQ2Eimh3OoyM\njiJSi+efOUbchsWFGko3OHTqOEZ/L4enxxnorcAe4GoY35+jd6lB9ugUNzYk6zsR/bqJ3vawZULj\nif2svmUnfUWTDUJw9qEHGK4sstSpM9Xx8ZYj9EKBlpmS13VaUUrQU0aaOraRo9QA31CkUUrQcSm2\nOsxLRTnNopsSgpCCYaDqVQZNh1KuiNsO8L2QNNUIMhbj9Vmu0Er0Tc8T5mzMKCUQMSEhUzLtGp6K\nlFgltA2D05HH3qhJZTLoZhYPNFxKWafrE0myIkAh0YREItC1rt4xSp1fQxZ0AyCO45XASLDsrtyr\nUglCvAIRY4U+o+kIoaGURhDFzM0voUSC57U4evQopd4yd9zxRq67/Aqk5zNSyqC5dWzDIEKj2qgT\nJjFxmpCRgpxlUKs3afkRgZah1gxZWlzC7bQJ4nOfM6Ug4MLNG5lZXiJKNOrNZZycze7db+Tur/49\nkXDx+b/UU1KGDaZnTtGX0/jB/Q8Q9OdIJkzqHZflyjJ9g12r59mjC2iWxncf/w7DY0VOn55h7ZYh\nvMOn6FSbZJ0Sy60OmiYgihkuDaCkoFWpEHsBbTcm0EMypka908Br+vzlX/8Fb3/nO/H8BqVckdGx\nQSYnjnH1bTch7YNoBxSshxk3g1qQrB9ZxfzJCbzeIka5n9bqtaQLiwyXcljNGpN7j1MUJt78IiVl\nUkVjQShqvXlealUpzjTIK+gIyZm5BbZvvIiFVpswk2OREBkpxoDUkCSpRlMKWmlEmIYsixA76OBn\nTYr9ZTqNGmm7hu5k6MTdnRMj0Znwa1w2tgE1MkRcrZEJLE6qEF2zmGt0KMcJPf1DOJbNxi0Xcnz2\nFKv7HRIm0KRCJorFyjRSSgxdEEcKeb5Hkf+Chv8yXX9lYCk19FcIWkgJ6fnK7V9uW57brjRNiedF\n+H6KZWaJfY877riDam2ORx99mIWFBW66+Qbu+/vvML+4gJNz6PgB0F0atCwD3dTxfB83iImUxA+6\nWtFx3JWLUqKbDaRSjI6NcvTUKRJdcmZihge+9tfc/tY7+ME/7UHZo0hrEBV7wOy/fWb/fzv9/x8v\nt9Egly/z/N6XWOhUGZ+d4tvffpgXnt6HEhnuvec7zE4tcfTEGQ4dHedv7/kWLTdi1dqN9GwY5Pa3\nv55Y02g16pg5i0QFZB2DKGyhpSE7b7yEN77jRmICGq0Wy/UGw2MDDI/1c/z0US68eAuf/OPfxg/q\nJKnP4Ogov/2pP8BLnjnfr0zMDzMUaCT1DkY5T1LM0p5bZK5ZpV5t0Jmv0u+lBDNLyA2rOKWHHOpN\nedHyOaQ6hKUsGy68iHJ/P5uHRrl89Tp2rBrFDH1K/aM0/YCeTRspjKxCIYiJuz2GrrNspvRsWkfP\n6jUUevsYXr2WzNAgKuxQbLUx4hRPSlI0AlJm0xC5XCc8coqqbRHuuhlj+3bsUGLkMvT0j+AjqNVr\nTI0fw12oMD4/ReCKlxExPWSgp4+c5aBrEhCvYBmrFURMvtyz6GLlua5NxjkErNu7vFx6nVO2lOdR\ntS6sbJoWqK7Nn+8HVKvLfP/xf8IyHW64fhcTp6a56srryOdLdNohUujouoZhGCgl8KOEjhcSq64I\n4LmVgvOfL+2Oixw7w0KtRjOMOHj4MEePjrNqcJAXnn+OW267DS9uoJsGwvde9cy+ZsEyfmicE8cn\nmVuoM1VrMF93ydt9zIwv8MT3nmFsbAMvPH+AltvBMB1I8zzy8PMcOHiMu+/7FntPnqKWKJqNBhnH\nYmikj49+9MPYJVi7ZZAd127lzXft4qY3XgkiwpQBZytnkZbgyWeeICVk/OQhNmxcw9LSAs1WjaXF\nGhevX4QngBvg+cMmuVgi0pSznYD8mlGCpQp9vUUm9JRD0TInHZ+jGcUPY5ewUGSkZ4TbdlzD69Zu\n5uL1FzDQ24fX7iBbHvHcIiO2xeLyLGvWbaSvp5czk9MEcZfQqSmwhIGumURS44mnn2JucRGv4zM7\nc5aJmWnK+Sxre3pI/YBOHGObWWJNQ5Vy7JEVUhljB5JLrn8T+06dwPRc0qBDc6lC1GyQCyPyYcCQ\nYxDGbWJfdkuxDihRJwoCMrpNPp+jUMyfb8zPXWmadqWSVm69cif/3GPDMP4Vbf/cde6eYRjouoZl\n2mQyOeyMg27o3PnWn2BhvsX4iWluvvkNTE9PsfuOt2Ca5srr5XnQoNXxiFKIku6cB/FyXyQBQ9PJ\nmBaGZiB1k+/94El0M8O+gwcxyFAu9pOmHh987y188g8+wP33/Mm/+ryvvF41WIQQthDiOSHEfiHE\nESHE/1i53yOEeEwIcUII8agQovSK1/y6EGJcCHFMCHH7v/Xeo
dugN2+RzzkMlnrxF5apN+rEpqRa\nr9FqNujtKRErl2xOp9Dr4IUtnv2n5xjIjiACQeK2yZgWuq74mfftxhppsvs9b6B3XR+5fMIPnnuM\nn37fexnbOExuKM9dv/AGshmL3/zQr3H11TvQMpIdl1zOzh0XoWKoLcxz0bomPAncAsXC63GGshys\n11hOXOaXl5j225woaey1Qcv1wc6t3LT5Yi5atYHtN1yHJmKaLZfleo2lEyeoVpaYbTWYczt4fhux\nMMsqIYimp7DaTdaYJp2wjWnYSGVTtwR5H5ZSn5xRQMSK5U6HfBAST4+jt5axHBs7Y+DFMZlSH6a0\nSF2fyU5CZyRPLCLu/6OPkY087JLFgDQxpIahdfuGIIkRskBDA38+A9uAf4bX76ogpQ1agohC9CRg\nsOxQzOkkaXenRWoaUqQYEjS6fU26slUqdW1FMznCMLtN/Cub+lcuj8VxiqbpDPbnGT89TjtMeOCB\nh3H9Brvf+mN0vAaPPfFdbrplF1EUct211yJJCWKPUIV4SbBiIqGhCYkuu0KB5zKfoWuYmqLc14uP\n4Mqrb+GB+x5AVzEbegtU/UVOHjzEJz78MXJzFY7+/bd45Atf/PcHi1LKB25WSu0ALqErDn4dP6JF\nHkBzqU5zqYG/3ES4EWM9Q2SERdYuIJTNqZNTFItl+nsGOXJonOnTc0ydnkOXDp1Wyr69R8gVewhN\nqLfaHDlxhly+QMNd5KorrmL9qo2YRg9/9bW/Zf2OtfSuK/Hc80dZWK6z58BeNmxYz+J8nWPjR9m+\n/WLu3H0H116mY+4DNsBUaHHoyCKZVKPvkq2Mmwmtusugnud1F17EzhuvYdXatYwUe2m0axza+zwy\njqjNLTI/MUncaBAv1zA7PlmnQKhpCMNApmA1m5hnZxg2dGzdJB8ktNIEteMqZsb6cHSNDamOSANk\nHDFNlchr8jqVoTq/zOxiBdKIjAZL8zPoAlIhsDIWs7UGQQB2JyDwGjS8NpECpWsopQikotps0AlC\nik4P+77ZB78I/Bm88+2zJGlKsafA0KoxpGmRCoFh2UgBGdvGsTPknCzFfB4NgSFkl9mLJPS7lHgh\nBJZlr2QP/V/QZLp7ZF1TJNd1CYOAQqFAs9lk165bkFJy5MgRrr/+eqIw5aX9x9ix/UriUONDH/hl\nMrpD5CVI9PMl3cpZ7QaJYZDJOKxet4bPf/mzxCrhs5//PJdddSmWOcDnP/VV2o1JVpf6WXpuP/2t\nGKfqkql2EM32vz9YVj7EuUmNSdfZpkbXIu/ulft3A29d+f28RZ5SahI4Z5H3r64wSfHCgNjUmQld\nJpcWOVOpsFhtYBg6pqFz843X8a633YGT0ZBS0VPo+rckSdT9R3shhqXTky2QkzmK1jCnZ2Y4PnmA\nKA7Bh//ygY/QN1DkpcMHaTY8ZGziRjEkNqODQ5imxK0GXH7xZfzKL2yDx4DboLK0mUsvvphksoqm\n2/S9bgcvNSrMNGtUpqYp5Iu0m01ap+eonj5DcnqaTCvETiRZx8QyDRxNIoOIfCYHpkGCxItSlJXB\n0BReHFMNQ3KJpJ7V6fv5D7Drzp9CpQqHhKJh4mkQ6CYNoaiSEBs2UurIlkvarCGjAA2FkpI0CBmf\nn6GR0cAGzZIoqXAltMMI0zTRHItMzmH9RRfQP7KKqf2XE+4yYBbGFjxueb2iubREMVcGdKJY4fsh\nRSeHTLuSVBnLQpOyCyunCkPTsQwdkSp0qZ1Hyrp0/pdh5nPZpcspM3CcLNValbm5WdasWcOLL7zI\nt775TY4cPszePXu46uorCUKXY8cP0dtXYM+Lz/NL/+E/krWzqFStIGtdfloul6Pc08OqVavYvGUT\nd/34XVx7/c387n//BA8++CRTU3XWrhsl9Ov0FPr5wd88gNloEy0vcnZxCalSMhn7RwsWIYQUQuyn\na4X3hFLqMK9ukTfzipf/Ly3yAOqtDs0oZMatM9FcwstoWLaF8APQUu58725kMeTW22/hwW99izfd\ndjuZjE6pN0cua7J563rcuEnetNBNna/d8zdYWYtdt7yeN+zaxcPfvw8vqNOiyYv7fsj99z7C63dv\nZ9X6IvuPPcmZs8eoVZe5eONFfPqzn+Uv7/kqvvtQN1huhYe/t8zBffsxay5zk6fI5QpsvOIq3JzF\n/IFxMkoQzldpN+usMfKsLfZRyHWNYPNCUM6Vuor3YUhfqZcl36XlB3iui8haSLvAqq0XMotPM2zR\nGyVc0Ih4/LNfIdE1Qk1gSR0jShhMbHryvTRNg7xSpJ0WuucyiMJMU9qRj65p2NkisabTbjVpyhQ/\nCImiGKlJpGWjazqaadJstDl+7AR7D+3lBz98nqe+Uepml8/Am982iabbTExOEMYxiRJknAKObaJU\njK4LNCkw9W6A2LZFwbLZuGoNYwNDGJpGlITEIiVfLICmd20QSLoSfJEPiYelh5TzGqQB9aV5Tp04\nyKWXbmHVcJmevM383DQv/vAptm5dg50RHJ88wZt234Fj2uzcsYOeQh5EimEY5PN5CoUCG9avp1Qq\nIzE5fmKCk6dnMNFYv+4CLtx5OS/+8/fYtWsHowMbeP7pp3nx+CGeOHaIQ6dPUYk9NOdHVKRU3fHt\nDiFEEXhECHHz//S8Ev+SB/Gv3uJ/dfPRZw+CJvHTGLOg4wwUMYWOsjXe/f73cGr5ONXA5I8+8/vc\ncusu3vXuOwh1nxPjZzl55Bij61az/YpL6C0WaLrLXLN1I5/9yz9k68WX89k//yJXXbOVmUmfb/7D\nPzK2aj0/PPXPPPa9Z2nVTPa+tJcPfvBXefQ7/8jOay/ld3/nN4hYYn2+AwcguRp2b/oE/oIg2Xs/\n2cpZJmZmiD1B2HIpSRCOQZKAMiUly6aqSeYXKmR0ExHGuFKSdprUhM7A8GrCfIaiodGXNTBH+jnS\nCEgT6JAwpVxWx4LHfveXWZW2mDUVwovI6Sm9UuIkCqfWJpEChUAkJpoQZIUASwfLQsYQS4FeHMDq\n+BxyXQbsApaQ5GKNOPIRrZg0kAxIjSHNYtmyWW9oVP6mQHrfInITXP+JNuu2WIyfjMlZ+ZWBY0Sk\nEuJErcC+Okkao5kGQRgy3DdAo17HyeZI0gQtFIgoIKcC1o/102NZZE0b27QIQw9NJZDEQMzk499k\nVCnSiUlOnXqWYnGY2WqL7RdvZ/0FFzC6fi25fJEzU3M8+MAD6JZGYbDA23beyZHDJzh+/CQZO4vj\n5LoKLvU6b3rTndxy+y1knR6efvIfCDJFwiN1solPGLl87APvY+PYGPuPHELYJqEUPDVxGk6P/2jB\n8oqgaAghHgJ28iNa5AHcfOkWMrZNtdPCk4Jjc7O4qUa+r8htt9+A+VJ3u3DXh67kySef49jkUerM\ncOzkIW655QaOnj5BYaCIKCjq1WVanmTjuo08+K2Huevdb+eev7uXU4cr5JwCb7rjDoazZf7rf/k4\nv/tbHwe9j/vveZS2
t8zP7f55Pv+N/4EjfsCNWeAqODKd470/+RF+9rpb2d3TQys6i6y1yJo2VrlA\n2mmweGySIKcz5ClqkUczhdn5eWTOIRMm5Ab6MCxBnC9SXDeGmj5JtqdA7HbVGRfrddx2h3I7Reo6\nbuKD5pGkBqkfsVrL4CQxWRLMWBGnAkNKyqnAJ6JhKfw4hUhHej62MpDEFLIO+9t1ooxFoNoYiY4X\nuvT19EC1jiUsDDTcyjRJGFBTJqUJi1N782x6Zwu+DG952ySf/qNVQIJKFAhFnIJlOyRpSrzi2UIK\ntpOnFngYOiRBh4IhWV0cpGTrmEnXwUymMWnSIPElBTPTLVHtLHEcYJo2cZhSa7UoFnp5z6/+Otsu\nv5xEM4hQ6EnKsQOHufzSbbznZ36STMlBaBopgjhUfOQjH2XPnn1omiTVTXpGBhjd0MPQSJZHH3wQ\nldFZu6qM/9zz7L7rx/CExxsu2cGDzzzDHTt2UjQEvhXTiXwMS+P7x/6d/ixCiD4gVkrVhRAZ4Da6\n9p0/kkUeQEBKO+jgmoqFWhN0C98L6Mtn+frD36DSmqY23SF3x5t5+83vZbxyGg5n2HJRTLaQ4V13\n3cFzB/cwM71AbbbNoSWPky8tYOcz3Pu1++grbuakv4iXtMk5EcWi4NFHHuG//9ff4MDk83zmE/cS\nBW1+9b/9J971jnfQWfg6fKf7Fz73fA6tpVh1skOQxEw6El/T8aVJPVgmDn1yi4v4wxnqZ5pEvRkc\ny8Eu5RlYtQoxeZaak8FUIdO+YnFmkUqzypHFKgVT0h9J1ndCZlIXTNlFmYRNIBXIGBlqCGK0RIHo\nNrK6adJJIgbf/R4O/eNDZDothBF1B4MxdCyJb5jIIKZj6yRAKCUi6VoRYlgk0kRTJkmS0kg6yMFh\nJqKIpudRuHuATb/WgjfAO47VuPsra+i4JrFKkehErEzkhUClEZCSJooUgUwgb2foz1jkiBEoojBE\nL5TQDAu1skqcoFj2A4SXorkdDF0j9j2CMAGrwM//4se44KprSbqNDqYQKCnYtuMStmzaxNLCNH5k\nkS/1oltZpKb4s898mr17XuLDv/wRrt+5k2LR4fiBl7jrnXfyM+9/F9VaG7PZ5pg/T2wF/OCLf8uw\nnuUd11xB4rkokVIyLEQcI+JX1w3732WWYeDuFURL0jVgfVwIsQ/4hhDi54FJ4F3QtcgTQpyzyIv5\nNyzyADKOjUpjmvUWaRITBC5pHLG8WME0d7Bt+zamC0t85ut/w5urCzTbEROVRXoHTfpHCoRhwI2X\nX8ex4ktIXyCDlDBoM1Du4+ihM5x+8cWu2aaj8dS+F7jk0ksYGR7m8OwLfO2+v+Nzn/g0n/7yH3Hm\nzAT7Dx7gva/34JeAe+DJzytKPb1c3rOG6X37WHPppRiDJeLpRbzEZzyuUQgjBre/jiF1lqDHolBP\nmDFjnHKJ2Wde4jhtMu2AKjF2FDIgLGwzZWsgEaSISJDP5jhrpEwFErM8gLY4ThgJYsMmR0is64Sx\nwFIproq54t3vYv6Sm7ioXOTEl7+EFuiYwuOsZnLFh38VM3B57HNfIDR1HDRSNExSIpkyl4ToCtww\nJDE0NMumnM1hFfNYizWWT+ep5ubp2dzBfCjmTW9Z5p6v9qGQ3UySxF3HrzRF0w2EUF0ULlFkMwll\nS6IrRS0SmKZNb18PzWYdEQY0Gx2QKYi0OydRCkNKzFTHTTtkzSx33Pl2Nm/aRGVmlqGR4a5h0gpL\nWZGSmhqprhP7MbWFRRJVwSn3gYC+vh6+8udf4MD+fTiGxtatP0ZraYlSXy+F3hzZoknPxWvIWQ69\nVh5baFSbVY7PnsHrhOzcvh0VRSjtRwgWpdRB4F8pjymlqvwIFnkAepxQsCycVANdJ84YxArcdp3G\ncpUjp8+y64bbmD1Z57uP/ID3/cJPUunU2Dy6jsu3X8Zv/bff5pfe/4tct/V27v/277Fq1RiLs7OE\nYUzg+xi64s53vZF/euE5Jien+IuvfY2xoTJ2j4UIDf7wc3/KwkKNW2+/Ca9zmAE3hCp4WzSOHtbY\nmMkye/AgvbrO/mef5XUf+nke//7TCE1x0+23MzW7xLFnX2TY6OXM9Cx+pc7BpE0qJW5jgXLWYBSD\nNX5KLVYMlPppLs4hkSyJmELWxnRsCoUyW3fewtCmy9n/6Y+QMt6116aAGemkwkXFCbbUePLebzOs\nl6kYZTa/78O0dB0z1RhwBYtVi75mFWv1KGPrS7z4xDNd8mgagW5QKJZwMnmySifJGBCEK1CrjkoT\nIiU5+dh2rvjlZ+CT8LPf9Xji4T7aboCfxoQRhGFILpddoYgJ9DjANgzGijlyxERRQDNJCNOYTqVC\n5LvEcYTUjC6tXuuylnXdYN3adczOTBOEHT72X/8zA2Nj5B2HgtSpLyyQSLBtB8vJIlBYhsXA4DCG\nSImSBKnpSDPT9cHM5wHYcuE6Yi8AvUvsFKaJihRK0zD6BhjadBF3/cFmFmfmeOwfvo2+1KSUtRl7\n448x8eILuOP7XvXMvnYie0qQeiFlJ0eP7WBJwUBfEcsy2bf3AFpgceLFU1Qmljg7vcxTzzxHY2GW\nxx77Pnd/7W8Z6B/mG/d9g9Kg5PrLdvCm22/EKphMnV4gb9t86e4vcsFFWzCMmGwujy5Nbr3hJkaH\nBnnPu3+an33bz7F69Ua+8537ufmqFL4H3AKHTvRw5c4rWS8z5DoeZrNJf5zwg69/HcPt0KpX6cxW\nWD49yRVaLzNLC4xUWqz1U64UDjsSk8tyvWxrCxJ8HEsnDlr0rxqi7Uiigk1fYtCrW4i2R7tWIaou\nkPPrWGkHPeknCTOse9/vMfaR36IjFKGuI0RKD4JCtUkxN8LRMy0sLYfrCzqFEk5e8Mgzj+DnNTpZ\nRWSB0iSlUg8500EFAaHXot2qU63M05ibJapVSV0Xr93Am53nqb+2aV0PzED++Dxv2A1SKBzLxDRN\nSqUSzWaT0PNI4xgpNXqyIUUjIVEhiVQIkeJHAW7YIVIhURqCVNgZC0OCY2uUB/q5aOeVIBVrx0aw\ncwZ9Q/2kgUeqKyxTJ2caqMCnubjI/MwMM6cnCIKIdqzQ7CyaaXOORiCkXLHt0+i4blfAwsmilCKj\na0RRwNCadWBb6MODDF1+Ce/5nd/k4ltfz6SrOF6P+bmPfwK3Z+hVz+xrFix+EuHHIYHnoXyPkpMj\nn3UYGOzhtl2vZ356mQN7XmJ2cgInZzExMc3EsSk0Suzfd4JmPeXMVJNPffkbfPPeR/i93/wEOXMQ\nQ2bJmxZPPfNdHnviYb7yp3/BwEAvN9x8NZ/848/QarrceuU13HTdZdx08zauv+FyxoZOnp+vPPpE\nzMG9e7hkbDNxNSArcxRFgUFNQOQy1NNLc2Eed3EWe2aOmr/ISJSySVgMGhQ2KQAAIABJREFUBinL\nS3PMagnCzpA6WXrtAsLziBsN0kab1AuJVUwnSajrKQu2xvQLT3DmM78PnkfD1
FC4ZAsDREtzJImJ\nrws8GVOTMdrW11GvThJXjnF07z/T8Kp0vAQqi4RnJykeOMnonnFu6h/C1HUCz0MkEXGzg5MKdNcn\nh05eN9DjrnKlCkOGM+D4Bi/+4xj8B+BL8P4POZRLeRzTwrZt4jhmcHAQx85gmRZKKvot+P0v/xmf\nffg73PiOtyB1jcT10KKYkXIfRcvCzJi4gYtjmdimzuLyMlahiC4EUwuzjG7bypG5M3gqQnMMnP4i\nViGLZupIBXk7y3D/APlCATub7VYiKzpN5+gzQghiT2EmNrXZBrWFOpWZs8wcPc7pA0eonjlLZ6GC\nUilGDFLovP3XPkz/RVvIRxoyDXh0OnrVM/uaBYsXKaJkhYCnNEi6Huydus+D33yIWqVGu9Uil7MI\nl5cQbZ2rbroK3ckyd7rOsecmWZ7osOeHe8mZOTLCYfLQCfKahZPv5Zlj+1i1ZYQ7f/od5JwybnWR\nu7/4dYYHx/jiN7/E//OpD7P7tt1sHZtiOHu0y0O4DQ4dyBEFAb07LkEbHqDRajGX1VB2gXapH3to\nlPZMhUvtIqkecfuuN9C/eSvFjesYHRulGcTkiyb1zhJZL0DVWph+QhAleJrN3tTlsIg5mIMzXoDR\nM8zqkVHCsElLU2iRS1baHPnUzzLztU9jaQGhSvC0gFU37abpRgjhkNl2HbkNOzHMIqG7xJMPfh1D\nBDi9WUwlWbNxDEOzSNKUXKyIIh+hCUxNYaUx0pK4XoMojDCETcNzsS0b/+A1cCfwXRDiCd7zoZ+j\nxzQp2Xkcy0ZLIpyMSRh5lLKCDat7cPIdQhnyzvd/mKGhke43vqahpCBBEocJ0nQwBAwbCiFthoaG\nsGRKyczxZx//U7ZdcCnCNJFCB6WjdB27mKcwUiY/XEYWDBKRop9nDHTXnaE79IyiiCjwiWVCHAf4\n7TZhs0PqNRgcHcTp66XQ1wtSEho6qdJQsYtZkAgke/ee4JqrL33VM/uaUfSjKOmiHkIjVQozm8MK\nXaQmqEYe5XIvbrWJ2/QZGRnk4ItHSKTP/OwijmlCpJAqREU6sZ5w0TUX0gzrJM2Ivfv3YTgaO7Zd\ngKmDbQs2btvMPz7zMDu2bOWS9Zfy8T/8Ew7t/wU+eM3zsAu4CSZTk5OndCwh2X/vQ1yU0ZnUfaw4\nJlMuM7plI3ue/SF5dLKxxA1aPPoPD7JrzUYKzRbC87Adkx5rgFR3WJIabRkxqSWgYoLBPqIo4ILe\nYZy8w1LBZ7LtkhkcoGZKZCS6U3IBIjVIhY4rXZzIJGM6aPr/y9x7B1l2VYf6394n3nz7dg7Tk2c0\nmpE0kpBQlkAiCBDYYJGTcQJjA8bhZ8CGR7IBG2QQwc+Bn22wjQy2iEKAJBSRUBxJM6PJPaFzuH3j\nyXvv98dtCWyD6tVzueRT1VXdXbdvh7tW77PXXuv7NI9++VPk0hgtNVp4yKhNU1iYbIVS3qHVDXAC\nzajOk+9ECJWC45OkGSmSIArxbBvfLeJTxCoNYMahG4ToKOHBr81x+Xt98iJCHKpz1VVD3PrVPhrN\nLkq5mAw8Zajm8rhCMbPQ4cuf+CtU3wSv/s13cuUvXcPBj30CWwpq1QIrSzNkVrWnnnAMnjRUSmUW\nZ+dwbRtjLJJOwIff/34++KcfXmtbWevvlwLRE68jRC+onzy1+49Nmq7j4FZ6trK+/hroHjvs/rtu\nZf3pp2GcPEb0sLOGbG2kI+YjH3kfP370CFMzs1x77cv52Ef++OfG7DOWLFGSYSwJvk9S9TgVNvGr\nRRxtcAJFq9mltdjC8nzmTq1gxS7dhQ5nbdnCKWeR5aU6+ZJPa6WBcgTz8yeoTQ7ilQqYVNBXq7Ln\n3gOI1GJxeZGbvnMrh4+c4sBpO3jp5a9i4eQcV5zxAGwA/hDmXi35kz8apur4jISKrY0OpfFRFpc0\n24Th5N7Hqe7eRcH36Pd9gnaHNE0pCodTcZsRS9K2bKbDgAVfc6K7QuIVcQfy1OwC7vpxJopFDu3b\nR7fdQocdvHyOUmOV0LWoC0FiFFIrhOOCyRDGIpOaunTIZwGzt36LzbHBRbPi+VS27uDEY7dRdmxC\nu0dd7KiMFIu5hx/g8gvP4cGH7iVLMuycT3VslI7KKFQHQAjsCHKFEq2gjc759BVKDG7bRGdOkH/e\nj+EWkG+9jWte9Wq+/q3bGLXhkQcfYGhkgPmFaXTXcOexaR44uEitf4Db736EN7/z7RjLI0Ew32gR\nGYlWhlLOQxJhVMa69ROUiyUsIUlTRalQJHN9/vWGr/DKN7zpJxRMoNdhtTY7w3+uVpmfuhWDNWZA\npjBaMTMzxRnnno10PdI1jpjBIJXg2IO3sf873+Liq6/kknMuYHnjGJaTe9qYfcaSJU4zUCCNRbcR\n4foWjfkljNak0lBfamKlAs+zCGOFLTwWjy9QdmwWp+eZ3LAOJRM8u4wnHZpRSNKMCE2ClXdY6tZZ\nPrbC6Ogwh/ZOsboU8LZffwefvf46br7xFnKuhbTpdbr9Drz5FevJWhYX797OWYs2g/dOoWZnuOS0\n01k8cIxRIUgbTdpTR+m3Xdo6o+La9GmH6VaThpMjEZrVvMvWnVtZpwyVYh+NNMXu1FmYn6NvfIJs\nZYUgESRJh8LYEIPGsBxFdHIuSgsqwsKono3LkpLTXvA29NgWVr7+t5QX5mlaisxyKF38HJx1W2gf\nfRShmuQLVWTaIdUaZdu0U8PNd9/C5MZ1+C1BF0GQpgRK0Zib70EzjKAetylWKhxbXKajUvLlMg/d\n4nD1VcANYN7wLYLpjdQqw2BnnLXrbO6773b6B8os15t0ZI7lTLK00MCZX+HeX3k71hoL+eRyE8e2\nGRsqYlu9FUNKRRJ1yOVKiCzFc12MVqhUUyzV0GtJ8aQ8SRowQv6MNOld/26FMRotLIQUaG1YmDvJ\n2MZJjLSRWqCF6fEJtGHq+zczkobs+ee/Z9vSHCPPvgzKE08bs8/cpKRlYVuSTCd4nsdi0KLTCigU\nynRWO/gih+UZklYdbQEiZKgyQBIabKvI0myTOOlQLRZxKxlpGEK7hLZS+ko1Vo81oAvzJ+aoVovU\n5xa5/lOf5qKLLuJNb3wtb33HrxJkgrxvoA2/+drX8tUvf4tqrcSBHzxE3tj0KZvZY1O4hQIjGyf5\n4UN72GoVsVWANIay0uSVYrRSYWDHRor5HG5jlczyScpFomPTLBZ98vNLtCxBog2ZMthCMpqroo3N\nahqyHIeMbNvKyftnEZYLmaFjW/RpTTtapdpYJVg6icCghEFJydSP70Td/QMKMkJ6OewkQFk2jiNI\ntcb2HCy7yOqpORynQBTDilTIKKakbcqWhWdprM4q/W6BmUSx3nXplz4Hb8q4+u+Bt4GwfswnP/cJ\nFrsOOo1IVYxtSVbrLaRXQGaGTCdEUpFiiI0NGGzL7uFVHYdjC4uctWEUS9ggUsLmCvc98lhvolIF\nqCxjKWwyPD6BEZIelUyscf5+1nry
My6jESRk+IhMszRznHWTo6Q42FriCE2MJCWDTpc0bLGqEpSX\n47E7bsNyNCNXvPBpv8Uzt7IkGYklek6/ekZR+OR8izhOKVsaXwgGijWs/hpBGJJzXfr7aywkbdIw\nxsnlKdkediugYXwmd20gtQKChiJoNOi0AzIR4SY2aZQwMVbjd9/1W3zmf3+Wr33H4rQzzySOj5Ov\ndaAOS+k+Xvnmq7nxU1/h4uJOWo15cp6HEjZxqFln59Ck5KwcWWLjFMo4QYhWCWFzlajeh5UmyHbE\n1I8ewLdAtTtYZOSlQyHTlKVPw8nT6HYYLPjE2tBIEhYXOmw8+wIOuUN0pMFJu1hZRmZs2nd8i9Ww\ng2/nCJwiIgvI0g6uiQikTd/wBnJZiBEKy3dwhU0sBBpBtxkRpE3cTGHZhkazxdjQCHGjjePmiJOM\nwMpoCk3HV2R9/RT8MusaNkc7Dps3pHiPw6YdTVbvy5OomLwDoRLIQhWF1VNTJBGOlVKrVqkOjWPb\nNsv1ZeIwIo4jxvsHUEYyuG4Twckp8rZP2XM4kqRYWUI3Cpmfm6O9OIc2vYlI1igxGDBCIHs48qdd\nYTQSKcBIxcLMCXZfeCndRGF5AoNC4uBpxZc//D7spEPec/E8H8tIHrrjDtbH/0OrYXpttFRhcIxh\nzM8x6vusy+fYXupnfaWKo1NUGlFxbbwsY3FmmvnlVdAWWRCRE3myvMu5z9tNTEyu1EdHR/RP9nH1\ny67kkpedxfrTx4gCTb2b8cDex/jg+z/Kq1/2ClABfmEE+oE6pMeWOHTfE1xV3cFQCOUzt8K2TTRG\nBzk1VuGEpRizLESnw3JfhZf/1WdYyeWQ0lCWNtHCEuHcIu5SA9NsMTk2SqmvhCUEqTJYmeqVTitF\nUiFZCTssBQFB1GHdWB+n9jzK5i07OO/Fv4i2HIRIyIxBhW1Cr8AZb/5dVM0nswSRFmRCUj19B/sr\nhv2FhE6niT41x7C0ed3zr+ZtV76ID1/1AtZJG1dq8jpDhhkJhlYY0MEQeRahgKbJOKkkt59a4IfH\np9h/4jCP35XvHTvfAhdc2KSbJiRGESQZKstQWYolIE0z0ixGoyiW8jhub2Vprjao1+skSUK9u8rU\n4jK37zlEXB1iMc0oDVSwPRfbcpmZ7akG9/7gJvb/8HasNTpNxhqvWKW9JNC9w1CQaxt4+RPSjAGJ\nhdSG9vIitUqRTBYo+AUsIdEGNBo01CqS2+56kMUgJkXg5QsUK0NM3/fQ08bsM6eckBZC2kg0mdG9\ndgNbY3kClSginZF5kkYckQsSSm6eILMpeA4qb4jjDs04ZHjzMMudRaTjM3OixdJsl+0b+zFOxhlb\nzmbP3f8Mnk03EFxx6TXsP34YlTXZfc5OllYfYrIGrEDu4UWOfW+awkCRsy+9mvrMcep2zA8ff5Rd\nu87gjkfvY3eqsKVkfbXMzG23MxwmVI2DygTacoiSlIqSOGlCrVJmJuhSzZfQRXDaAftnTxHmCkjP\nR1gghMb3ffJByqpq4uZ97v7mPzIg81jao+vaaJXH3riFfccPE7QXyTKFbbt0BcyvzhEMlUiNxTWT\n2zm3OkSzmsM9Nk+hHWHCFq5tsHVvHxRZFqueRcOzkN0OsSvw8jkqrsfFV76Avffdj9dXZHOhzNLj\nwFWPw5/C+Z8OCD9eQRiDwCBEhqVV7zXTMdIGx3YQQtJYbVKvL9NptQFQAprtCG0kLVvQPdhFOjnW\nnzhMHCvySuEVPMZGN6PDjOl7HmFl73G8aoXa+Dhj27dSGhumow22ULhCoozACAnGwqLn505JwQJX\npMzNHGd8ch2YhFQLLPFkmcAiCRqUfIfS5BbCMGA+jamuG0TbFuVK+elj9r89K37OpdKeCTfKehST\nxLbIPEmGYjVokPeKpGlGkGr6q/1k7ZhcLo+QMathG993QWXEi6s0hI8q52ksrOCmNscOTFGuVdg8\nuYsoMPiuAT/lSzd+ie07NnHJxefzsU98mFdfNQG1U1CHS97xOu67/yYeb5/kiscOMhcv0LV9dlWH\nWKjPsl3bKKNwkRSmFzjxDzfQj0ViKTCGbquF6StRKeSoxRlHpmdopSlWt4VVreJKSXVilPXrtnLq\noXtxohjdavT2Oc2YNO9igoCJqEnLleREjK0iVKLQhw6xcnIW45aRsoFQhk5R0Cmk9PeXkdUyjxxZ\nYJ1yEd2AfGKYFyH3lDo8kcswKWSuwLgOHZ2y+awzaJ9cQDgWgRTUV+tkjx9htFxhauoUwmtTUiOo\n9z+O9TCcuSlmoJxQb/YGuzKToeOQfM7tlbANRFGIY7tMTc+RJRHwpJPSgNFYdk8TsdII8HzF7T/8\nIeO+jYOgubhIqgRWsUot1bSbTbLlFeaOHmfq7jsYGKty1mXn44yMIws5MmkjsFFC4aaGRr0Fto3r\neDQ7K6zMzbF+/WaCuQXKE6NPUTK1ELQWj7LaiGnl+hkuhRhjmFrqsH7CJrGdp43ZZyxZEq1Radaz\nRMUZIT1/iCVcEmxazTYq620UF5oN/AyKdg4jE1yrZ/kySuF0MurNZaqTo4x4NTpZl9ZcnbO3n8FX\nv/IVan0FtmxaRz1cZXpmiv1P7OG2W79PrVKj07iPAkAbDh38Nhd5fZxIHCrLK+wql/l+VCcUKecs\nCfwgwbYkfcKiGmsyYZOaDGF6YDadCrQRhI6krG2eWFlm/Zk7iZqrpIUcVmiI3DzLs4vMzs5SNoZB\nY6GSkKKfZ0VpBqpDzCwuUTnjXKLZJ8jNL9MtxeRDBeU8+Z1bWLn7doRvY9b1oTxDFAe0ybh/yGd2\n/gT5pZDnbttO7blns27HGMt//Fn8xQ5OOUfJclhstWmVA1pBBztRBFKSWZJG6wSDW7awYcfzOfr4\nEwwkLvuPepzxrBh5N1xwccx3vtMrrUocjE7oBqtYlgEJntsTCbXbbWzRm4y0LKtXypU8JUOybUGc\narSVoynBLeXxhSENQ2T/CMtBl5yXBw3tWOEoSdReIZv5JiU/o1Yu4fQV0Y5FYjxmFldZbiVEiUVo\nFBvOmmR86w4+/+m/ZvvIZl74xmuxiw6ZsHFUysrULHsOHCUSNaRK8Up9fPWmB3jjC3aTHx162ph9\nxpLFIBBrI6iW7dK1NHg2cZwitIUrLIIkAqGZTxtMrBtlKYkZrhYwShN2NbZwaFkZnqnQmmoh7JSS\n7+Bphz233Idfi0AqFqamML7N+jPXMT4+yr59+3nFCwXDJ+fgTtDXQ/onozQX9rJlaBg/9Viu+AzO\nCyrawk0CbFsyahzyGLQNek2vZ5DITGO0xJYufRsmGWh3aJBwzoUX86Pvf4OFpQZmaYllnTIysZWi\nsJBZhCdtSrk8rSAidW2WrYTh85/N0PBWnnjsAfAkfpondmLSoM7cw4/gbOqjU/WIshgLi26nSVcr\nDrY0J4qS933u41xy3qWkpDz0yc/h9Q1gFiNkGCCkh2tplmbmKJYLpJ2IgVKFUBhUo8H0448Rj
q4g\nleTI9Ay7HxiBq07ALXDJpRE33ZTvvXbaoLXqJYq2npp9f/jhR7BtB6Oyp2j7BvHUIaNlWSidgYAt\nO3ZRrOTZOFDlru9+E5FmNIfGcZKQzDiA04PkoWhGsBDlqAqNu9DFpo0jBUmq6Tg2890Y2y7Q7qzy\nojf/ItLro7MsiEZyaCxYkyoZJPNTxwhSGMlHnGp0aNUF4xOjpEHMYqP9tDH7jG3wVc9QhzKgPXsN\n0pYQqhRsTXUoj3QFCEllpMpqEDB/qsX+Q9PsvvwC6nFIq5vjil94HkGWEkcxUjn4iaHP1+w8dx3D\nI4MMFirkY4uyLDNamOD0LZNUcy7XXHw/vBX4GNx99ySPf+Ug3bzHOZufRdfTVLZtYjJfxVnt0pYW\npYlBKmGKUAJe8zLG/uK9NAoWKkzoCo3MBCcbIcdOHmd6apq5UzP8eN9jrEw1KXdjNnsVJkWObneV\nKF2h6HoUtCBwYEUmGBv2Tc+j6ssc+vYXsNCYWJOiCaRg1Y/IKiHduE0+7TCSr+BG4IeKgUCwvlQj\n38n4h3d9mA+98dfJOpLnv+KlLEcrWJUC/ZaP1AnDro2wLPLj4wSOYm52ClYXcaOYftenffgUthJE\nXpkD9/hPbfIvuSzqtZlo0BZIG7TQKFLSLGap0SIyGm1itEkxZCidoHUKOsOoFIzCShWWNoRhl2MH\nT/DE/qPIOMFKY5TS6MwhTTKyJCGNAuIkoK1jFgk4rmOOxZKDsc3eQHJAOZyMBfNhzFIS0kGR78sx\nM7NCrFIoFiAnETLDxiEzEUePHCXwyoxIA9Jj/8mTbBgZpGul/NNN9zxtzD5zpeM1TE6qDGmokUWN\n9DIcKQjaESYVCC1JQkO8FBPGIcV8gY27NvDw4UcpjhfIVkOOHNtPviIp9Q+wErVpOIqXv/oVfO3m\n71PuF8SqQ+ZofFdx+vnbWawf4ovXjeP/9X2QB/X6AQYf+jNWCu9lB1Xs33oxxXedhDsfJ5e2GKmV\ncdoRxZNNZqugk4T2V27k4a/ciCEmKrhEWULiBGRDBdrtFXbgUOoGnEmOtlehnSqOBEvMRm3Gav2M\nqBxRN6EtbepRSGx5VKuDTJT76XS6RNKw8czTiaMmT5w4SmV4gJInMUIjleTUiVMESR0cF9t18RyP\npKNAKrq24nCjTph0OTpf51+++T3e+6rXkkSLZHaBTqdN2olYeXg/OwYG6RQcwrBL7OZYUYpyXz9W\ntcKGZ13C3kdvI9p5CH/asNFXDG/MWD5qr52k//sy7pMryZOQip9GHykhMcIQRzGu7TC6boJApcyt\nzKO7NiZLkVmG1oosyzBGoFVGZjRp1tN0W5ZFJiTCGDKlkNLCUuB6Hlmm0TKlVPRJEsX+vU+Q6oxS\nqYRtWxgUmVB4wmFmYYk4bBEkmlzJJkkMOUvRjRXN9v9Qin6YKmJlerJOI1GpIMs0SkEaGZYXOmvy\nTYFtXFwpsEoZgRWwcedWLrjyMuyyYW5pnm2nj9NNusR+yOve82r2t/ew5fx+zj7vLKpjoxTHhphf\nWeGGf7uBxZO3kV/+V/gA8Hl4+PAvMtsIecmb38gVuWHUl29GtALaScRK0iLQEZKMIBO4XZ9Sx8Iz\nKZNGYU0OceFn/hev+NCH6GhBd3aWcG6V9TLHNrtAWK9zSC3zyOoMDSul7OcoZRbRYB8Hc4rjHlDr\nY2zdJDljaJ84zqbaKDXlceqRvQRzc2x/zrPYes5OhO2QxIajJxex7T7K5Rrv/8AHUFqRqoQobjC0\nbpwvfevrvO6tv4ZTrHDRZVdxbPoIv/CaV7KaRNhLbTYkNmfi4wUhB5dnWam4zDsSp5Bn44b1+I6k\nvTBD+4mHCI+fYvqRMlwB3AqXXBSt9a3pp2xfTyaJ/gmv9Sl215OESN/PMzY2wfr1m3jhi6+hUC4j\nbMH55+ym2V6ilca0Mk2UKjI0mVbEaUKqVI80aSBJM+IsI9CKyGhi0+stTLXC8jw0mi3b1+MXS0yd\nmEVYEi/n985pDGijsTJYatZpzh7FtwSNTkKpUEaHDcJQMDr2M9kqP/m9/jsT4umuTAoSDJFKEFav\ntcG1K+hYI42D1B5xkKCJSLoN/LzFznM3MjFZ46EHHuWBHz1Mq53QbMbs3HU2k5u2sH58F/sen6bV\nSPilF17LzLE6p2aWmZ1fpej2c96zL+TVL9yP+F3gV+G+VpH3/cr3OP5nNzDyLw/hO+Ddv4fESem6\nEpFaVCKJJTzCWo0NH3w7jw15WLEGZeEvhqSxZqbZoA+XUS1pO3BL6xTOlvXsOXaYmWaLWqVMLXNo\n6Ix7TZ3JzVsZHJ1g/cA4BQWCiM7iLCOORBZcYqMxpFhxwnnnnMUPbr2VXKFAvdXmzAufRddkbNy2\ngQcevpeBoRqvftVLufHrX+Xv/ukr/MuN/8buCy6gVHI5dGgvEyPr6Sw18ctFcq6HlHbvFN6zaJuM\n3EA/O8/ajZdkdBbmKIsMJ2gyvTCDt2Uz+x4afepW7LJLwx6UW6mnAHpK9VycltXDqlarVQqlIpMb\n1jMxMcHw8DADg4Os1FcRUrJ/3wEuPe8ifumF1xCvNti+bZKh8TGGJ9fjeDZxHBKmCbHKiJOEONUk\nKiPJNEmmyeIMnYDJIAoThOWQKgHKojRYRgiPVqeLFFAtV3rYKGEhtWJxZo4kU2zbPIkLHDh6mGKx\niAk7HD5+ilq1+nQh+wyuLFFIN4jIlCAMExqrLUycUS1WeyA2NBaKouVSLQ0wVBime7iLrqeo5YD6\n9BI6k7z+NW/lpS98Fa94/Uu4+Bd2cuLI4yweWuLPP/m/ufyS53POuecTJTF+0cEsfoHNhyL4EUS/\nB+/+SJUL2zVO0z7dpWmKqSSIQ9q2Zi5dBatArAOkbXj2xz9I/ZzncMkLXo+T9xBWRinNmPrDD3Dg\n859FepKCLlIjz4hf4eGHH8QUXGzPZ0F3OeorNg1NsmXDFvo2rKMTJNTbq0SLizRXVsjZFgP5Ik/s\nfQQ7l2Nc2th+ldGJfrZt2chS/Qh/8N7fY3V1li1bxojiiNNO20Y7DXnOtW9gZPt5LK90eM3rf5mC\nn+fL132WT73lV1j88hfwxXGElRL6glMmpFGwGBvoZ2d1nOXpOZamj9PQEX4rJbQktdoAY/3jbNp5\nPnsemeyRF26Byy6OKRRyFHJ5crkcruvieR625eI6OTAW3U5ImqbMz8+zXF+h3mwwW19mYsN6pG3j\n2pqpfffyyQ/+Hu1uCyTkfYdWq46wJY7jkaYxSRaSmJ6HJU1ThDQIqZGWBgISQrSvSEUbTYs0VVRL\ng7iej2tFVPwSxeE8Qhuk6bEzZ6eP84Zffj1l22BsFyUFY+UK+9xRrnjTO7nixS942ph9xvYsni3o\nRAFxHFAsFinn81hGsLq8iiUtsjRmdGgQXzjgOwxvH6JvIoftSa4YX8eD9z9K2kn53r9+m3/+27/h\njOedQdO08Qo1ZmdOcMbOHXznlm9x5LFDZA2NM9Tg
t17b7rFpPgtf+PsBxo85vCapkT/ZZCUVJK97\nGZ2/+RKiG+EIn6XxYfzFBD+Cm977fgpXnM+Jm75HjRCZWRgV9/i6QlNWEsux8FRG17aQZZ+TjXnG\nzt2FWG6yrlRGRprOaoNYGdJ2m7aAMdcj02CVCsxkERPDw5hTIY7IyBzJsYNTtDoRl1/xfD760U8Q\nJSm/8zvv5vrr/4KB2gRZFGPFmh/e9G0evf8ObANOGNM4coLdI8O0141x9bW/y/dueQ3dxlFqtRoD\nTo5Gu0U3TEltjaxO4iU9V6Mu+cTYzM8e41R3BZbatCs2JZlRWzJs2BHwxP5cz+a1hlfouSMBDJ6X\nw5A9BeiuVqssN1vMzkzRV8mjU83miW10Vk8jyzLiToZr+4yPTNJutGjGq2jBmvceLMuGNYqlI3uz\nLAYHWxtSYjITkyiDa1VQOiLTvcLDkSP7ec/b34nrSjxPksv5aEt7tlcSAAAgAElEQVQQ1lvowObA\nkcN0SwWWjs4wumMTf3njjUj19LriZyxZqjkXO1No0VMUxGFC3s9RyBfodCN8xyIOAxAOWSKYmzW0\ncUi6MWk3pbvSIOlmnExitK3odBTTS8t4sY9jPOxMcs75u0kWM44vTPH2t0zjXpfATji2zeH2P+zj\nncWNNFcUectQrzqc8buv4dTf/QuBbBP29bH9dS9j6c++gHZgbG6Z7MbvsckkZMqh4wrsVBBbgpbI\nyDkOky+/mnXbd3Hw//97XrTpdI4t1WkXB9HaIVtpsdhtoRaWsTdtZOOWSVpTJxFG9FyLSZeunZCv\nDpCTFpbnsuRb3PyD25ibb/KN73wXZQponfLnf/5JHGnxR+/9Y9xCjt9757vZ2j/Cy17yQvqLHiem\njnDx865g2/MuIzYShcOQV+Bx10FmEZ1uhGUMFRsaOsPO5SnmS6jlVer1FiKJ8VVKyfGpjo1x8J4S\nz7pqdW11Cdn3qA1rcG5jeqDwJ41gYRjCmptRCEGz0cASFlmsiIOUdQNldBLh+TksA5Z2UYmEtT7j\nNXg/WLIH83NsJIYkUzRbK5QHBxhfN4odzeO4vTLD4MQkueIQ0/MnSZXirN3nsnjgBJ4UCKVIA0XY\njlH0Ro8j5eFZOTKZo+rkOHngJJMjQ1x+0SXc+f27fm7MPmPJMjI2wuLiCs1Gi07QBSSN1TbKGOI0\nQSKJgoRMZuDYqIUOzeWMoZEB6tMrmEziGB/PdgiVxaN3HmB0fJBWfRUpNQsnZxgdHqG11OCiCwIu\nHa7DZ4CH4XMfW89r+rcyPC8Zu/4POPCeP6G4GnLsr27Aizs0qhZBfRX10X9iuZpS7qTYrkNOJRgp\nsF0PL41Ilabt5rClhRvB+NZdBNVBTnU6yMcPMttoMDw+weG9RxgSNoWiTbVQItSKnOUTOD5hmpIz\nBmlsKqmFygyR75KPNNdc+3IW7YwoTlntLjA738HSLsvLS8xOz2CUopF0yXW6NK0mi1OHOO+1r+Ss\na6/utbVLQyH1IIyh7HLpc57L/of3MD87hy0FFWMzXi7SSBL8nM/83DS+41OSLp5R2O02fYNjHP5x\nfy9ZboDLfifgrz9fQj15jrLGL5ZPtcr3NvpPzpkYY7CFwJEeKrbZtWkDnXYHLWwslWFb+bUWmt6b\nEUkP5icFCM3W7dt51RvfxNbTTiNb+36d1gyf/OCbybJllFBMz55iZGwd5170FmyR46orL8eWcNvN\n3ydMUrpBiK1tctKm3aiTRQGxDUHSxKkM40eKE3PH+acbDz1tzD5jyfLYocOorKevE1auVypMNUaD\nrW2U1mjtEEmNMAohU6Q0zB2fA2NhMoHQgkKtTJ/rkqUZajEiZxyEK2icaPGj47dTKzr80YfqcC3w\nHrj78DDr6zu5aCHEecdrkQP9TDQTTqQR+tN/QyMNEPWUsjB03AUKmYVr5VHG0I5T0kqRqFqlU1+m\nHKUok+Eam5pls+eTn2FZJJTCmLpKcXwHd3mJft+nqzOibtDrBlzpUF5fZqXRoOS5TFg5po1mXgWE\nYYuBaoHcqsuX/u4GTukAyxXYuSpRIvBFgjCGuNPt/fe3M8ppSF6H1PxDZMvfhJObUP4o0q0gi2Mo\nv8jZ17yYqR/ciUwyCr7PcqxwnDJVz2X/oSMUxscYzfnkMtDaIKol4iRmsbXC0qMT8O4j8DY4/0sp\nrivoxvqnHCyq10ovBAYNBoT4KWi3EiRJTNGNOG/XFr522wNI7aBFjlQLjMzwXIdmp8m5zz6LDZOb\nGRkeZ2JigqHRCsVaracalwLLJFz38bdSKHSIYxuVCnSoWZ1tsO/umyhmionRkMvOs7n6xb+CZQ0i\n5QjtTsw999zDPbffw8pih/bBg+zevoHF+hKDfSWc0hh+yeaBhx//uTH7zI0VJ72+ISn0UzBp6JX4\nEOC4T4pzwPNtsiwmM5pyro98Lk8aRGRJzKte9VJuuOGG3tdiyOU9hsp9jA8Pk+mYq16yl+J36zAH\n7TcJvvK+03jBDFhOjoev/xIl6xtsM71u1KUoQDg9qohwDUEGqecirYg0i5Bujtz5z2bbb7yJ6lyD\n77/3fTjZCjYpQmZ0myGFnI9JuzTdImEc0780zdjkRh4/cgjX9hCpYiZskB5Z5bxSHpEZjmUBCzoj\nV+7DKTqsW8k4JmO2rtuFZwLCeAlpxZg8GMvgSIs4Z6GKNuUw4rRqlXNHxpi/ZZrOdx9mQB+jNTbC\nyrVb2PLSDRgj2XH6Wdzz6b/p2dFKNZySZqrTpDpfZ1xalCJFvlQiUwFhYlC2zfYdW8kPj+GOjtJ1\nD1LYMIe/F849L+aOu/596PxkJfmJAOnJlceWBk8IrrrofEyakCmLDBvQuLZACcOmHVsY3zDBuedd\nRLVapVarcfToITpRQNnq9Xc9+siD3H7n37DSOUjBKfUwsEZiUlAmoDm/j6P7NzA4cDq18ixOdivh\nisF384iu4vzTCly4+yIK/Vtxc1sIo4BbbrqNW++4g3rzOML5H6rJAwvLAmGepKrbT4kypdVrjfA9\nH9excRyJ7VSxbRuMIIkj4rTFwECFvfse4uzdO4njmJznUCyUUEj2PLqXgVqdZ5/1WE+W8VW4674r\nufhUiRd8+N10f/VTnBOBiVssVhNkJGmpFJWleJaFE3eZHt7Gc9/7Pu78y+uxju7D0ZqZB+/h8cN7\nqboForhOZoGXSTpG0S5IDIrYK3Phb7wTRwoOfPl6NuXbREEXZ7SGjaBkJNFci1BKprMOaV+JcqVI\nPN9hfL5BSMoZuUHmw5gzzr6IPT/6N2r2Cjk7JQhdUAbXctBzAS9OB7jsrMvQ9Rb2ckizZLPwwtMZ\nfssvkMv1o7waxiTodhM7ynBsF51ziOcWCXTChJenL4uZa9TZsXMD3WbKro2bkbkcxhIYUohSlk7t\nonDVHNwCl14RcvePqv9JZPTk9eSZy5NKCCk0I9USZ2wYYbBUotsJ0TkH15V0gjY7ztiBV8nz/Xtu\n5R+/8Y9
[... base64-encoded PNG image data omitted ...]",
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# randomly sample one ref\n",
"ref_ids = refer.getRefIds()\n",
"ref_id = ref_ids[np.random.randint(0, len(ref_ids))]\n",
"ref = refer.Refs[ref_id]\n",
"print 'ref_id [%s] (ann_id [%s])' % (ref_id, refer.refToAnn[ref_id]['id'])\n",
"# show the segmentation of the referred object\n",
"plt.figure()\n",
"refer.showRef(ref, seg_box='seg')\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"1. woman in front\n",
"2. lady smiling\n",
"3. woman\n"
]
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAMsAAAEACAYAAAAdo4LwAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAIABJREFUeJzsvXmwbfl13/X5DXs6853vm3t46m53a+62rUiOsJzBdrCd\nIgG7nOAUwYGKIYFQScoCEhNMEopUTKVIhYKCGAwBh5AASYwdWx5ky5Yta2hN3erhdfeb3313OPMe\nfxN/7HPvu+9JaquC5XZSb1Xduufs4bf3OXt9f2ut71rrd0QIgYfyUB7Kby3yrb6Bh/JQ/kWRh2B5\nKA/la5SHYHkoD+VrlIdgeSgP5WuUh2B5KA/la5SHYHkoD+VrlK8LWIQQ3yGEeEkI8aoQ4oe/Htd4\nKA/ld1rEb3eeRQihgJeB3w/cAj4JfH8I4Uu/rRd6KA/ld1i+Hpblm4ArIYSrIQQD/H3gD38drvNQ\nHsrvqHw9wHIOuHHq/c3VtofyUP6Flq8HWB7WzzyUfylFfx3GvAVcOPX+Aq11OREhxENAPZTftRJC\nEF9p+9cDLJ8C3iaEeAS4DXwf8P0PHvSP/psfZTQYEIeAyNbY7ics84JxHPHs5UcJk2uMD+8greCo\nDKxvn2UyPkQT2D13lsPxEZeeejvPv3yVp77hcfTwLCqLSImpqgmmmONDRGfYJ4RACIp/9k//Lz74\n/veTximdXo+yqfnrf/O/5S//R/8ulW2QUUQ+XeIJHN7ZI4s8wXr29+cMBiNkFDFYH5H0+vzVv/V3\n+OVf/CUMEiE1CLW6TjsPCCFOXp/eBhCkuG/78f/J0RGj9XWEECfHSi8Aj/QWXzc8lsUM4oBVEddL\nw0zHeN0htg6EAykIArwAKe85Dt4HpJAELEIEQhAo1Mn13Uo9pJQQAod7d9nY3Wnv+9S9txI47UC0\nnM5X/sz3jhEn/4+PO/3/tJzsQwCSg9u32DxzFghEkQLnUQiED/d9V1avvjOlwRuUipE+EJYz3j8c\n4jsdPjWfE9IeOorwp64tpEdK+NjPfORBVT2R33awhBCsEOLPAD8LKODvfiUm7Lv/zT/O1RdfYntr\niFcJPi/Z6W+hD/f50ivX6EvHbN7FLO6Se8FBOCQsSzLtyK/dZnN7m9oGpIRuNiDTGWVT4FVNHATd\nJMP4QKhqnLOkcYd+0kMESbksCD7gvATrsHmNL0uMM2SZJk4HJBsjimrB1sUdtrYcPlIoKYiERCv4\n8A9+H3/hT/8gf/6v/DVe+tLroCXBB3wAgm8fmmj/rG8fqA+eACcPSUhBcL5VRsQppWwVJQiwUhAH\n0KZhu5OxKaEWgmuFI1cxImgEFqc8CggEWvW+p3TGGJTUq833FNSfup4SAkJ7NqcUezXQ6eeLlOJk\nHLin9KeBcho4QhyDtr0G92FDQBCr+4bgA0Le+0b8AxON9x4lBSGsYogQQEoEoG0gSIEOHisEBI8L\nkm4Ma1rzxTzH6g4xEhscWkYIsfrOpMR596a6/fWwLIQQfgb4mTc7prh9k/3XrzCwF7Am4IJnTVo2\nMZx/5nGqWcFwrUSN3s5GbwOxu83RZ1/i5tXPUhvHbJlT7t/G2ppf/83fYJbXhCpHxpIgYoZZhBOB\nJIpZ664xXNtgbXOTwcY20loMAkwFBKraU0xz+h1PfbRk9I7HmB9dI1WeYjzHC41CsJgvqPKc9dEa\n9XhK6e7yoW9+mr/91/4SP/Tn/xNeu76HVymx8Fhr8c61iqclIQgIstUNAghBAJQUSL9SMDg1owJB\nIK3FNksudhPOqsBCSm5UnrlWhEgiCEgREFJiT41xrI8hBJRqYYQICCQEjRABIfzJMQJ1ArPj8090\n+gFrcLznQYtw73Bx3z6BumdphCAEf29GX1m3e9cK7bRxgvlw/5jB4YVACIikxHuP9Y4gQSKQtN+D\nwyOBgGArjhFKMC4sxD2c9HghcPjVdQRBqNUk8NXl6wKWr0UaI9jYvUjS76HriiztUE0WbG6vUSwm\ndKTGawjGM755i/W1PtYUdKRF0WCnS2iWXN45z97BPmfWhwwHW9RBsvv0c3z+C5/nXe9+ClkbKmep\nZgtkU/CJX/kI86MJQmuiWGPrnJu3rxEjqJ1gMp4weMoxWSy4fPkJ8rrGm4BC0xtpsqyLqWvSXhdC\nYH17i8xW/MTf+lFuHi75U3/2L7KsJVrHWB3jvIBQ31M2QK2UIYRw72GFQJZ1WtUIFrWabSPrONfP\nWBMWEyqu15Iy6uKFQAnwBORKscTKMpy4cCs3zPtWOY/f33OR7ldqKeWJhch63S9zH0+7UMfvv1Ke\n7vTxp/crqXDBtQBejdXem7zvXO/9yb0oKci6fYRvLZI8NaZ1HqkUQYQTl40gCQSEjKl0oDsvOTfq\nsxCanICOJEJKPL51TVULqSDA/xY6+5aB5UtvvMbd63dYf99zWKUxjWHQG1AsGpq6oZNqJosFfaUI\nVc6Vz/4miY1I45TRYI1gDFop8umUOAQi4RAukAqJXc6opreJykeQWUZ9MGaQSh4/twXW4C7uUjsL\nIuJD3/Nv0Cz2mVx5GWcN73j67RByHr34OEJFVM2SWVVRH+ZceeNVdntr7M8nfON7nuHaa1e5fOEC\ncdZlPJny+IVzfPT/+Ul+/tee54f/0l9GpX2iJMVYhbX23uzoPF6ACCBOzdJp1sUL1876xoBxXMp6\nrAmDkylXcss8kniZooVrwSJaMBACUqkvU+ZjoJxW9HsKfDruuF/Bu/3+fduP5avFYcfunpTyyyzL\nsUVyfuWenorZhJD33bP391RWeMB61vpDgm/9LodH0I4lhMQFh1CreNEFbLAIoRDOAoJOYxhGKS8t\nc2odEQuLRK+sbfsMghB4AuLNDctbB5Z3vuvtJO9+F8nOFuiYICRhvuTaq2/w2ouvs7OxQVksiQ7H\nSB0TRgP27hxybruLkjFp2qeTdTB1QV/EzMqc2ig2L17ECkUWQ7Gc00ET6Q5BWZI0IvhAvSxIg6eW\nAVtMmF2/Cc0MoWJuHl6lV+9jCk2sNJGGkfOYQUamOzzxzFM8JSu0jnn/+57DaonUGbujEdPxEcti\nhvYL/sZf/xE++qu/yT/9Zz9P0h2gIoU1DUq1D0f5cBLDIAJCqnam8+3MG2zDuV5CVxlCiLm6rDjI\nUqTwaGlpHTDPsW49OOsDWNugVHSPWAiBEO7N2qeV47Tyn7Yw7XwrTxT+RKmPXb7jv9V5x/+PQboa\nfTU2qxjhlHUL9/afDvbvxUYBjycQkEq3AFpNMu7kM69iKCnwQSKCb10yD5uRRiYZ1yZTRJYSVu6v\n8KENKJFIEXAEIvHmmZS3DCxKGExTkr96SNrtURuPTjtcfPwRojhilEZI5elYw95syvnLT1C8rURp\n186iSYo1DZ/75V/h4tlHuDY9YHl0SPXZz3HhbY8xf+MqP/9Tv0KcrfNd3/e9TI9uY0xDVRQMki7D\nTsrmZkaIFC+8eo0zHcfGKKXOS7pxH5t
Z8nqBbQTOebZ3ziN6KeNixqa2+Njj3JJ8WZF0hxzM53Ti\nBBEsh+Pb/IHv+G7e+43v5j/9kQ/z0Y/+Kv/5j/5VUBqBwIqAjxQhSMTKhQohgHRgNaquOZtmnAsN\nQmdcWVbMUk3sKohThPArl0EiBPcp5pdbAsfpiPpYCZ2zD1gAt1L2Y0U+DaZwXxB++jrHoFIrq3a8\n7T7LgsP7+8990CV8ECjHhIhQ9wgB7z0BgVzFLMdWSK6YXk9AyJYkCFIirOHMaJ1xbTFKo4JqGcDj\nuMsfjytQCjh1j19J3jKwxFFGXizQUUzwFmkN11++zlPPPsf61giWS5pqSRwCnViwPLqFNobJnbt0\n1oakvR7G1Eiv2Vrr8shjv5dOt8dsfMi1Gze4vnSMZ5a9l17hz/3wO5CXn8TJBL9K8Xz2s88z7Ebs\nXb1JnA3IE8vdw7vQGK5PZlhTo3RGvqx45pl389rNG4S64NFzl6gXR3hbYawhyzIQgs3BCCdaJVvM\nxiQakBaWB3zre57gvX/vx/lf/+E/4Sf/4T8iTrpUIbQWRkowlki1imuait1Ys6sUnVhzJa+4qxVK\nx3QaQSllqyyyJQ2890RRhHPuy4By4mqsLMKxgjrnTvZ9Nar7/lgk0KrK/Qr/4LkPul6nXz8Y9xwD\n/PQYXx4DtWTAaTB6/+Y0NLSWOzjHsDZ044RPj/chSlbHr/afEA4tuKSFr5xduSdvGVgO5wuSOGE2\nmZD2MkaDDd7W2aFc1uzdvs5Or0c3SbDGoyKJMQ1NUzPa3qZYLHE0VHnBoNsnBEcWaW7f2OO1l7/A\n5OAIQgoyojQG5wwOh0odSktKa3jync+gJ1fpRYG1Z56CYJGhwmjL2tknuPXC5xmmCahAXdecvfRe\nnPscn/z0Fyiqgjo/oPENiUxRUcKZ7R2cdWyeP0cajUh0j8V4zMbWBo2zbMXw/d/z+/jub3sfP/Jf\n/m3uHB1Q4dEiIIOlq2Ctn2KkYcN6srTDS2XBzUiShgyBpE5aqyBO3PpVLODcKdfqFAu1cjl8CMj7\n3Jt77sbp16cV9ctm/lPHfKUA/ivFMMf/nXMnr78agxZCaGlhpe67jwcp6NZS3r/vvs8gQLjW3dxN\nU2ItORQeIdXJcQ+CWwjRUvy/RfHJWwaWNO7yiU9+mue++X0sqxozMyyWc+gq6iC5Op6QaQUIjLX4\n4FkeTekljl5nncPZId31EaSC/vYZXrt6m8nRgspEjBcNprEs8ilxoinKJVlXIW3UmnGtSbOE6tYc\n7y1GO0Y0RFmHSbnALg/odhOWiwmYKd1YEK7NuNAXlLEgynbIom16lx7jJ/+3n+N7v/sPEvd7WBEh\nVZcP2sCnP/lJHnnkIq+98hK3JvuoIEijmE6S8eE/+yf5/Ks3+Omf+qdc2Fljs6uIfMXeeMHycMkw\ny3ijLrgRoKs0QXiCCsggUCIgFLBSmnDMsnqHkR0CAeUtAgMhWrlS4oSFFUIiaRXDr2KCAEjUKddG\nIKXi2E9pCQRxj0IWAedWwbwUBH86R/RAkP4gsbCyKkq1VDXyFH0txYrJunfusXjvcTRIDc6vPlPg\nAWBppHdUsaZb1pwdZhyWNTaO6QSBl2BDgOAR8n7r9uBYX0neMrDY5QGPnt0klEvObO4SeQW7miYE\nyt4CLUE5i1cOHwJFWVFUJReffZpuZx2RxmgBu2XFlc+9wAvPv8wyzynKAhUEd27fINbtTFQUOVHc\nIbiSaNAjUxFBwSJoMp3grWBSLRjEPYxJCaFLM/8CSVkw6scsF0csa02UDsjQmNmUkpq9wyO6Q4md\nj3HLOSaOmU9Kyukhb3/iUSItWU/WufiOp3j9iy+gpUIS6KaaP/Kd38qf+Ne+nb/6V/4z0kSwmE35\nhndepLz0Nj7zic+QjYasLyoKdEuXqjZH44M4sRhBtEk7GQJe+DZ/EVYgCIpVYv1e7mSlrDZ4pJDt\n+1VyMZxS0GNb8uVxRHuMd/czbM579MoinLYexwA5vc17j9b6AQLgfnnQWpymwqUE674869/esQOh\n0MEzJNBLe3zyxk3seo9wusrgK14n3Jd8/UryFlqWwJ0br/PopSdQFkIqaQSkMkKGHqJpiFTEfHpA\nkILN9RHj8QAmE5pCkPUG1KZE64hO1KObZWSdiLrpMjkYM51PiOIMFdV85Bc+wge+5ZvZWN9A6R4o\nyXKxZH37LNbU/NLPfpRzw5j9518gtyVrwzPs336dSxfPsO0zlN7m4pkBtTVEWnO4P6fbXycJgWAm\nOCGpFjlFOWbjwnl66QWqfIkKEXVZMgqBRMWsDYYsJmO2N9bIm4Kjo1v8Oz/w/bx++3Xe8U1v57Of\ne56f/bnnCaMBa90N3ntGcPXOGCcDk8Uc5xzWCJTWICTGOtI0wZqaWEiENIDEhZbB8iu6WimFVOCC\nRazKWQCkUPfcuCgC7im49/bEtWtLYCQBd5/bdazwx67Tgy7Z6eAfODn2tOU5ltM5oWNW7TTYjqU9\n9zgHcxzHtHFNEG2wHgXLjgzkrmaiU2KpV+UUX553srb9nN455O9WNmyeLzhz4TymAWsLks4anSzF\nGYtUjiiVzI4OGHZ6uEhyZ++QqjFU1QyV9ShdQVCC53/t48iQkUYJXkmMNVhX0+mklI2nyEv++7/z\n3/ET//P/CFrgLYDHNYb//X/4r1nfPUuWwcVLZ3nX+5+lKAuaIjA5M2Jj2CdOEhZlwTTt8tqVK7zz\n2Xdz5vIO1WLChlS8zfV5+QtXefXGdbqjLr29uwQVs55lOFNz8fwZzDwn7WZ459k5ew7rDcVygZIa\nb+bs7uzw4pUbfPzXX+WDz30A+U1dbBAIUYOVdPsDClODh2XZEMUxSkh0rJgvF6ytbWDqCidh7+6Y\no+mCg+mc6eyIw70D8nxBWdfUpiaSAhU8hVCE0AJFRxGmrAhS0Ov1iOKYgCR4sKYBFEqHFaEgVwAL\n91gt38ZFLQt8T7l9CCsi4sHyl7aUx3mPEvKEhm4th7wH4NVrVvv8KZYuBO6zWiF4pBcEJMp7doZ9\nDkxF6PaIrCVE98As1T1qXEqJ8x6pxKrM5qvLWwaW/vZjdEwgG6QU+YLleA/rLB7FG9du8sTbHqOJ\nJUXTsJguiIRkZ3sX42fMZkvqcs71N24SOUEIBc7VeCeYT3LW13a4ru/gmooQGtbXBkglWBQ1OtL0\nez2+4cknCY3jjdde4fFHdlgfjqimC7xQ4KaM+oosFczmB8ynM7JijfHtI8LjSw5uXKc76DObzHik\n1+OFa/t88Lm30x/0mc/niNEuzXzK5to5Xvj8Z3jHc88yGI0ItSO3HivAS8mnn79CuZwQRQKjIp54\n4j3IpIPwEZvDET7ULCZzqrwgVhpT1iQKIhkYjfooPN0sZm9/Qk8LBpGELNCLYs7vbtGX5wlNTjcV\njCdT3v97P8BWFphVnp/62Cf5w9/17Rzu73Nnf8wHvutPEvd7bYbdWP7Oj/1X2EXOYn7AtJiT5w3z
[base64-encoded binary data omitted]
ZfM6atEq0zNT7HvyEe74zjfprwzwrVvv4dqNV/LQhz/KI/tn\nuCyocDz1qCyvsLtc5ptxjUhkXLAkCLspriPpEw7VxKCES2YVwvacU5MJjBVEnqRsXJ5cWWbDubuI\nG6tkhRxOZIn9PMuzi8zOzlK2liHroNOIYphnRRsGq8PMLC5ROedC4tknyc0v0ykl5CMN5Tz5XVtY\nuftOROhi1/WhA0ucdGmhuG84ZHb+OPmliGu3baf/2j2s2znO8rv/mHCxjVfOUXI8FpstmuUuzW4b\nN9V0pUQ5knrzOENbtrBx54s58viTnOP79AuP4yTkrYOWTxeD8LAmpdNdxXEsSAj8Hkio1Wrhil5n\npOM4vVDu2nN7dWGCJDMYJ0dDgl/KEz6jHaoRZ2CglWg8LYlbK6iZr1EKFf3lEl5fEeM5pDZgZnGV\n5WZKnDpEVrPxvPVMbN3Jn/7hX7J99Cyuf8NrcIseSrh4OmNlapZH9h8hFv1InRGU+vjirffzhpec\nT35s+Fnn7HNmLBaBWGtBdVyfjmMgcEmSDGEcfOHQTWMQhvmszuS6MZbShJFqAasNUcfgCo+mowhs\nheZUE+FmsP2pa4S6BVKzMDWFDV02nLuOiYkx9u7dx1mbt7Nlz4U8fHwfR77zZSauuoLVWw+zZXiE\nMAtYroQMzQsqxsFPu7iuZMx65LEYF4yVazKkEqkM1khc6dO3cT2DrTZ1Ui649HLu/eZXWViqY5eW\nWDYZo5NbKQoHqWIC6VLK5Wl2YzLfZdlJGbnkeQyPbOXJx+6HQBJmeRIvIevWmHvoYbzNfbSrAbFK\ncHDotBt0jOZA03C8KPntP/kwV1x8JRkZD37sTwj6BrGLMTLqImSA7xiWZuYolgtk7ZjBUoVIWHS9\nzvTjjxGNrSC15PD0DC+orOOLiwfIpDyd+wCwpgdedRwLxjnd+/7QQw/juh5Wq9Nq+5ZechB6xqON\nAgFbdu6mWMmzabDK9287EyDUijPA64nkoWnEsBDnqAqDv9DBpYUnBWlmaHsu850E1y3Qaq/ysjf9\nODLoo70siEdzGBxYgypZJPNTR+lmMJqPOVlv06wJJibHyLoJi/UzyXHPHM+Zg697hDq0BRO4ayJt\nKZHOwDVUh/NIX4CQVEarrHa7zJ9ssu/gNOdf/XxqSUSzk+OaH7uOrspI4gSpz9xGR0aHGCpUyCcO\nZVlmrDDJ2VvW49o85563g7/6u0/xrQceon7+Fo7deT+dfMAFZ11EJzBUtm1mfb6Kt9qhJR1Kk0NU\nogyhBdz4KsY/cRP1goOOUjrCIJXgRD3i6IljTE9NM3dyhn/Z+xgrUw3KnYSzggrrRY5OZ5U4W6Ho\nBxSMoOvBikyxLuydnkfXljn49T/DwWATQ4ahKwWrYYyqRHSSFvmszWi+gh9DGGkGu4INpX7ybcXf\n/sr7ed8bfgHVlrz41a9kOV7BqRQYcEKkSRnxXYTjkJ+YoOtp5manYHURP04Y8ENah07iakEclMlU\nm0qS0HbAOrpXZmLAOCBdMMKgychUwlK9SWwNxiYYm2FRaJNiTAZGYXUGVuNkGsdYoqjD0QPHeXLf\nEWRyJvg0SxUqTcniLknapWUSFulyzCQcTSQHEpcnupL92uNEIpiPEpbSiDaafF+OmZkVEp1BsQA5\niZAKFw9lY44cPkI3KDMqLciAfSdOsHF0iI6T8blb73nWOfvchY7XZHIybckigywaZKDwpKDbirGZ\nQBhJGlmSpYQoiSjmC2zavZGHDj1KcaKAWo04fHQf+YqkNDDISnzmytCWmkS3UZ4h9DVnX7KdxdpB\n/vLmm7FuzB1f/ife+8tvY/XaBn/5ul9lJ1Xc//Zyir9yAr73OLmsyWh/Ga8VUzzRYLYKJk1pff4r\nPPT5r2BJiAs+sUpJvS5quECrtcJOPEqdLueSoxVUaGWaw90lZuMW4/0DjOoccSelJV1qcUTiBFSr\nQ0yWB2i3O8TSsuncs0niBk8eP0JlZJBSILHCILXk5PGTdNMaeD6u7xN4AWlbg9R0XM2heo0o7XBk\nvsYXvnY7N732p0jjRZRboN1ukbVjVh7ax87BIdoFjyjqkPg5VrSm3DeAU62w8aIrmHrgLnZWynQO\nH6IbuCSuJZ+eyqSfGcY9tZOcEql4uvSRFhIrLEmc4LseY+sm6eqMuZV5TMfFqjNDtqlSGK1Q1pAp\njRA9zosSEmEtSmukdHA0+EGAUgYjM0rFkDTV7HviSTKjKJVKuK6DRaOEJhAeMwtLJFGTbmrIlVzS\n1JJzNJ1E02j9P6qiH2WaRNserNNKdCZQyqA1ZLFleaG9Bt8UuNbHlwKnpOg6XTbt2srzX3gVbtky\ntzTPtrMn6KQdkvDMeP2ei8+jOj5GcXyY+ZUVbvnyLfzw3gdR2vCLv/BW2krxwJEnmGvWeMWb3sA1\nuRH0Z7+BaHZppTEraZOuiZEoukrgd0JKbYfAZqy3Gmf9MJf+0Xt59fveR9sIOrOzRHOrbJA5trkF\nolqNg3qZh1dnqDsZ5TBHSTnEQ30cyGmOBUB/H+Pr1pOzltbxY2zuH6NfB5x8+Am6c3Nsf8FFbL1g\nF8L1SBPLkROLuG4f5XI/v/Oe96CNJtMpcVJneN0En/mnf+R1b/l5vGKFy656EUenD/NjN/4kq2mM\nu9RiY+pyLiFBN+LA8iwrFZ95T+IV8mzauIHQk7QWZmg9+SDZzDF2NDrctPv5BJ3e0apXt2ZO075O\nGYl5Sq/1tHbXKYXIMMwzPj7Jhg2buf7lN1AolxGu4JILzqfRWqKZJWfcN2U0SZaSad1TmrSQZopE\nKbpGE1tDYnsBh8xonCDAYNiyfQNhscTU8VmEIwlyYS9PY8FYg6NgqVGjMXuE0BHU2ymlQhkT1Yki\nwdj4v6mt8tTf9X9s9v9vDiUFKZZYpwin5+D5bgWTGKT1kCYg6aYYYtJOnTDvsOvCTUyu7+fB+x/l\n/nsfotlKaTQSdu3ew/rNW9gwsfuMa8wcrXFyZpnZ+VWK/gAXXXI507PLvPlNv8T2zRewtHeJm9/5\nEY79/i2MfuFBQg+C+x4h9TI6vkRkDpVY4oiAqL+fjb/7yzw2HOAkBrRDuBiRJYaZRp0+fMaMpOXB\nt5sn8bZs4JGjh5hpNOmvlOlXHnWj+IGtsf6srQyNTbJhcIKCBkFMe3GWUU8iCz6JNVgynCTl4gvO\n41vf+Q65QoFas8W5l15Exyo2bdvI/Q/9gMHhfv7ra1/JV/7xi/z15z7PF77yZc5//vMplXwOHnyC\nydENtJcahOUiOT9ASreXhQ8cWlaRGxxg13nnE6SK9sIcZaHwug2mF2YItpxFo9BHljhcMrgRq01P\nlFvr0wJ6WvdYnI7Tk1WtVqsUSkXWb9zA5OQkIyMjDA4NsVJbRUjJvr37ufLiy/gv199Aslpn+7b1\nDE+Mn3Hfoiwl0YokTUmy3jVTZUiVQSUKk4JVEEcpwvHItADtUBoqI0RAs91BCqiWKz3ZKOEgj
WZx\nZo5UabadtR4f2H/kEMViERu1OXTsJP3VKs82nrudJY7odGOUFkRRSn21iU0U1WK1J8SGwUFTdHyq\npUGGCyN0DnUwtQy93KU2vYRRktff+BZeef1refXrX8HlP7brjGtcfcWLueDCS4jThLDo8flPfpal\nx5vMn2xwz99+k/21mEtbfewwIZ2laYqZpJtEtFzDXLYKToHEdJGu5Xkf/l1qF7yAK17yerx8gHAU\npUwx9VvvYf+f/jEykBRMkX7yjIYVHnroAWzBxw1CFkyHI6Fm8/B6tmzcQt/GdbS7KbXWKvHiIo2V\nFXKuw2C+yJNPPIybyzEhXdywytjkANu2bGKpdpjfuOnXWV2dZcuWceIkZseObbSyiBe85qcZ3X4x\nyyttbnz9z1AI83z243/MH7z5Z1n87J8RimMIJyMKBSdtRL3gMD44wK7qBMvTcyxNH6NuYsJmRuRI\n+vsHGR+YYPOuS2j5JWRflZe99DryUpLP5ynketJDvu8TBAGu4+N7ObAOnXZElmXMz8+zXFuh1qgz\nW1tmcuMGpOviu4apvT/gY7/767Q6TZCQD8/0NbMsIVURqe1xWLIsQ0iLkAbpGKBLSoQJNZloYWiS\nZZpqaQg/CPGdmEpYojiSRxiLtD3tzNnpY/z0z7yesmuxro+WgvFyhb3+GNe88e1c8/KXPOucfc58\nlsAVtOMuSdKlWCxSzudxrGB1eRVHOqgsYWx4iFB4EHqMbB+mbzKHG0iumVjHA/c9StbOuP0fvs7f\nf+qTnHPdOTTsmT7LP3/7nzj82EFU3dBwuvz8G/4bf/3uP0ELTTvI87wo48a0n/yJBiuZIH3dq2h/\n8jOITownQpYmRggXU8IYbr3pdyhccwnHb72dfiKkcrA66enrCkNZSxzPIdCKjusgyyEn6vOMX7gb\nsdxgXamMjA3t1TqJtmStFi0B436AMuCUCsyomMmREezJCE8olCc5emCKZjvm6mtezAc/+BHiNONX\nf/Ud3HzzJxjsn0TFCU5i+O6tX+fR++7CteBFCfXDxzl/dITWunFe+ppf4/Zv30infoT+/n4GvRz1\nVpNOlJG5BlldT5D2WI2mFJLgMj97lJOdFexikwMnTzJ0vMCOfB+PJ43eSs1ToeQeOxLAEgQ5LOq0\nQHe1WmW50WR2Zoq+Sh6TGc6a3EZ7dQdKKZK2wnfPFLdbWJxb496D47iwpmLpyV4vi8XDNZaMBGUT\nUm3xnQraxCjTCzwcPryPd/3y2/F9SRBIcrkQ4wiiWhPTddl/+BCdUoGlIzOM7dzMn3/lK0j97Lji\n58xYqjkfV2mM6CEKkiglH+Yo5Au0OzGh55BEXRAeKhXMzVpaeKSdhKyT0Vmpk3YUJ9IE42rabc30\n0vIZSckLLjmfdFFxbGEK33jc/KFPMJyvoNpdBoXDW8tbaaxo8o6lVvU459du5ORff4GubBH19bH9\nda9i6ff/DOPB+Nwy6iu3s9mmKO3R9gVuJkgcQVMocp7H+p94Keu27+bAX/0NL9t8NkeXarSKQxjj\noVaaLHaa6IVl3M2b2LRlPc2pEwgreqzFtEPHTclXB8lJByfwWQodvvGtO5ibb/DVf74NbQsYk/HR\nj34MTzr8j5vejV/I8etvfwdbB0Z51SuuZ6AYcHzqMJdfdw3brruKxEo0HsNBgcd9D6li2p0Yx1oq\nLtSNws3lKeZL6OVVarUmIk0IdUbJC6lOjFE/NsVA7HBZeYS98SpGGVgT57a2JxR+iggWRRGs8VaE\nEDTqdRzhoBJN0s1YN1jGpDFBmMOx4BgfnZ55wJEScCSOFPiei8SSKk2juUJ5aJCJdWO48Tye3wsz\nDE2uJ1ccZnr+BJnWnHf+hSzuP04gBUJrsq4maiVoeq3HsQ4InBxK5qh6OU7sP8H60WGuvuwKvvfN\n7/+7c/Y5M5bR8VEWF1do1Ju0ux1AUl9toa0lyVIkkriboqQCz0UvtGksK4ZHB6lNr2CVxLMhgesR\naYdHv7efsYmhM4wlWY1oLtVxrUe0mjBUquAYh2qxxE+744zMS8Zv/g32v+tDFFcjjv7FLQRJm3rV\noVtbRX/wcyxXM8rtDNf3yOkUKwWuHxBkMZk2tPwcrnTwY5jYuptudYiT7Tby8QPM1uuMTExy6InD\nDAuXQtGlWigRGU3OCel6IVGWkbMWaV0qmYNWljj0yceGG17zEyy6ijjJWO0sMDvfxjE+y8tLzE7P\nYLWmnnbItTs0nAaLUwe5+Kd+kvNe89JeWbu0FLIAogTKPle+4Fr2PfQI87NzuFJQsS4T5SL1NCXM\nhczPTRN6ISXpE1iN22rRNzROI8hR1oINxQJyPkESoE/lUdb0i+XpUvmeo3+qz8RaiysEngzQicvu\nzRtpt9oY4eJohevkOdW5cmpIaRBSgDBs3b6d177hjWzdsQO1dr12c4aP/e6bUGoZLTTTsycZHV/H\nhZe9GVfkeNELr8aVcMc3vkmUZnS6Ea5xyUmXVr2GirskLnTTBl5lhDDWHJ87xue+cman7TPHc2Ys\njx08hFY9fJ1wcr1QYWawBlzjoo3BGI9YGoTVCJkhpWXu2BxYB6sEwggK/WX6fB+VKfRi3FMqWxv3\nfvFOSn7IT731TfzTrbexdWSCo8cOckGhymWPRHhv+ynk4ACTjZTjWYz5w09Sz7qIWkZZWNr+AgXl\n4Dt5tLW0koysUiSuVmnXlinHGdoqfOvS77g88rE/YlmklKKEms7wQg9/eYmBMKRjFHGn26sGXGlT\n3lBmpV6nFPhMOjmmrWFed4miJoPVArlVn8/89S2cNF0cX+DmqsSpIBQpwlqSdqe3+ruKchaRNxH9\n4UHU8tfgxGZ0OIb0K8jiODossueGlzP1re8hU0UhDFlONJ5Xphr47Dt4mMLEOGO5kJwCYyyiWiJJ\nExabK6TFMoejFBlHPbw2EmOfYktaq3ul9EJgMWBBiKeJdmtBmiYU/ZiLd2/hS3fcjzQeRuTIjMDK\nM0v0r/+xVzI6MsHk5CTDYxWK/f091LgUODbl4x9+C4VCmyRx0ZnARIbV2Tp7776VotJMjkVcdbHL\nS1/+szjOEFKO0mon3HPPPdxz5z2sLLZpHTjA+ds3slhbYqivhFcaJyy53P/Q4//unH3u2orTXt2Q\nFOa0mDT0QnwI8PxT4BwIQhelEpQ1lHN95HN5sm6MShNe+9pXcsstt/Se+4wVavfWzQRBji/9zV8R\nlIrcc//3Gd28jk0nwfFyPHTzZyg5X2Wb7VWjLsVdhNdTFRG+pasgC3ykE5OpGOnnyF3yPLb94hup\nztX55k2/jadWcMkQUtFpRBRyITbr0PCLREnCwNI04+s38fjhg/hugMg0M1Gd7PAqF5fyCGU5qros\nGEWu3IdX9Fi3ojgqE7au201gu0TJEtJJsHmwjsWTDknOQRddylHMjmqVC0fHmf/2NO3bHmLQHKU5\nPsrKa7aw5ZUbsVay8+zzuOcPPwmZYqzUj1cyTLUb
VOdrTEiHUqzJl0oo3SVKLdp12b5zK/mRcfyx\nMaqjE9x99w8RM4ewmJ5S5dPGUzvJUwCkUzuPKy2BELzoskuwWYrSDgoXMPiuQD+j3OXy615Cf38/\nR44cpB13KTu9+q5HH36AO7/3SVbaByh4pZ4MrJXYDLTt0pjfy5F9GxkaPJv+8iye+g7RiiX084iO\n5pIdBS49/zIKA1vxc1uI4i7fvvUOvnPXXdQaxxDe/6OYPHBwHBD2lKq6exqUKZ1eaUQYhPiei+dJ\nXK+K67pgBWkSk2RNBgcrPLH3Qfacv4skScgFHrfywOkrPLrvMJs2b+YXf/O3+KMP/wHXvOJ6Nt57\nnJe8/x10fu4PuCAGmzRZrKbIWNLUGVplBI6Dl3SYHtnGtTf9Nt/785txjuzFM4aZB+7h8UNPUPUL\nxEkN5UCgJG2raRUkFk0SlLn0F9+OJwX7P3szm/Mt4m4Hb6wfF0H6xt2ZAAAgAElEQVTJSuK5JpGU\nTKs2WV+JcqVIMt9mYr5ORMY5uSHmo4Rz9lzGI/d+mX53hZyb0Y180Bbf8TBzXV6eDXLVeVdhak3c\n5YhGyWXh+rMZefOPkcsNoIN+rE0xrQZurPBcH5PzSOYW6ZqUySBPn0qYq9fYuWsjnUbG7k1nIXM5\nrCOwZBBnJJnlqle8gr//1leJPI2D869ARqfGqZzLKSSEFIbRaolzNo4yVCrRaUeYnIfvS9rdFjvP\n2XnG8z/w8XfhuYLzzz+Xn3zl6zEmQIiUB354Jwce20eUuHTDmLGKj2NdslQRtw2OkCxFS7jBJRQG\nFY36QzhhzGpnP3kvJi8FjqyRtQ6TdYokieTaS/u4+vKfoFDZgXX6+Myn/v3W4ufMWBzPAStwsCB6\nTTvCSAqFAlJY/CDA8zyKvouUhiDnUi6HSBFibER//1kY3aVa7qc4MkTSiamvNuBpyeCtWzaRasOD\nX7+DDVsm2PDkIluPNTj8no8TqDa6JNF0KMYe87ku1i0jVps4NiMOcgyfs4OT24bZdeMbefxDv0GC\nIh9BsbOKEiuskJIzLr7SaAKGhGBeuLhWsFBbopXENFWO0uRGRkfyzB09gbPUZMuWbdw5s8SCTXl+\n/yhl32MqSdiTK/BIq8bg5k04kcs21+HLt32R87ecRWk6wA659B+bQYeSRGqcLGAGzZ9/5y5+8bfe\nRuudo5S3X4zn+ax+7S72D01x6SV7aEydZGBslK4DsuwzfWyacuowMTbGzPQsG8shm0crDA0V2TDR\nhysc8CRdo7FW4yYZSaeByPlUSzlanQZCuz3FyLWseu/41WvbFb3YP8L2DMf1BeiU/kIBWxhkYHiA\nmbl5yqOjlAc2oIuDZ8wNwQLGuOzf9zh/cPIPueSiS7jhRS/neRecx/fu/BytjqSQMziVHlQpXymi\noxTPhcUj32Whdjl37G3zyMN3c90LN3No732M91u2bBynrz+lWsxTrJ4H0X4c5omaB6H9IDr+0eRb\n/6+NfN7Dkw5SSMKwp7BezOUJPIF0oL+vn77+fvpKpd4RJ2rQ7bYxWmJxsCR4gUchyLFOFvEnJ3i4\nu/8MYxkZHsZPJXM25oLWIEM/eIIBFVKdj1gsCKrv+xWiX/koC6JXZLj+x1/D97/+92xuL2ETB+78\nF2oo9j/wGL7VCBwSbRBIOlZjfB+hJK5jaXoh0GEijWkJyeFbPk8cBDhxk9sOPtxj1s/W2FEcZO/s\ncTZfdB6LM/NIQtIkRpuEwOvDKxTQicL25Tl26EnOBiqNDoKAzTvP5ZD0aC8v0RzJs+vJFQYu2MF1\nr3016lUvoCBdstU2kQuZH3HJ2XuYOX6UkbN3ELWgf/04J5amiYWlGDgkSQe/P6Aw3s+60RHCIFi7\nJw6ZtTiOoJN0kW4XmyUIbbj+uuv55Je/uMaMF6dLX6QQ/0p5pRdSdsiHAeV8gEATWUG92SE/OMiu\nyy7lisuvotbq8rF733v6vnXaiz1WSuiR6JR//vY+fv+D70Z0oTwI0svTaM6T6Tx+KPEdFxE6GCKG\n/Tzfv/WLXP2qn0XXKtx7xzfwc03yOZ9GNEiYrmOs8mKsG5BxGKlj8mHaa8V+9lPYf85YhBAO8AAw\nba29QQjRD9wCbGBN69haW1977LuANwMaeJu19pv/1muevXMjOS9ACkFfuY/RkRFcJLmcSzfukGYZ\nSRyT6RYmM0ghyecLtJodpONRLvbRNzxAmYCRwiD75uYJChV4mtyxX6lw4caz2fD8PTzxxt9ksqVI\nSx42FXTcmIFXXcPKOz/BrJPhbbsY77++nhds38Kx97yH2GmTz5o4d9/JiMqIHAdlLakQpMLSEbb3\n8TlwUvtc+/6PkCtJbn/7L1HKUjablCjOOCFiSksttFDovhJHTMrA+Cjjk+uIooTpxTqF1Rrbz91J\nZzWjS4eC1cweOEQ18NkSCxaOT7PtpS/n0QceYVk1yXcE7/zdD9B+8//k5MEFBgqjyJbADkj8oQLt\n7/8Av2BJbMbw1t242qX2wIPMtVYQlQLtpSZpahl0U849a5L+fNCDop5SSjEGKQ2OBU+CyWJIExxt\nueqSK/nUP/zDaTiRXcN9n7KUp0jGT6muKJ0yVB0Cazgxc4zAsZRGxnHcPHfdfSff+NoX4YVPmxxW\nARFKL+M5eSpDObaeO86j35+iuRBTKEmEEszXLJVSl2KQEDo5lEnQTdi3/27iCM6/eBP+xC7y1Ygt\n55zFWVv2cPDALJnp4JlJhLsFo5poLbEiBfvsANb/7M7ydnpi36W1709h8j4ihPjNte9/6xmYvAng\n20KIbWuMlzPGNVddSi7MUSkUSTsRnuuh4p4TXypVWJifpzxYpd1t9eqDdIoxLsNDVaqVftpRRNIy\n1NyI/XN7cR2fgeHhHlppbbzxZ36OfHGAQ7fcxvCJDoW+QUq//Fae+L2bCRsJ2advQ+mI9Srg8NIU\npWMH+eYf/TEDuos0DnXPUsg0ie/ja4MSFpxeJbQ2DqGB0BiEB14iiLtdQi2xQpAKhSMkA5lHU7fZ\nOL4OsW4YN8yhVurM7z+Mci3TjRo3XriHqSMHaTtlVlVG0K4xqQUbwgItG+MZn/v2PsJ4PsfSbAuv\nWuYLH/worBxkPF+lu3uQ7pe+Sn55FbmpwHCkiS7eglcqUHtyjs9+7A+JWsu0ozY6KNLyPFaymKWl\nFfasX4ejnB40eK091wJWZwgJrvRQWiHSLkmrRehWKedLNKPOad/EAtasBWrgDD9GCIEWmvVjY2Rp\nyuY921n6xm1k8w7CGo4v7MeVZ7JAQ8/HlZK426UdK7buHkMNdRiYqDIzvcD8YpdSLs/RqYz1GwO6\nThdpDal2qB+pUW/5LP3gEHP1OldcPkp/0MfWDW/Cso4dZ4F1FIiQILwIETogmuh0is7Je380YxFC\nTAIvAz4IvGPtx68Erl77+m+AO9cM5jQmDzgmhDiFyXs6DQwAJ0vYsrUX5102ijiJibOIWq1FkMsz\nNDqK60g
GBvrJsl6YshgWmdi8ncHJCYJiEWs1SdJAOj6OIxHChff/96euEfZhdIe5j32OcZkyU+zj\nwp+8hvGPf5pmfZ7VD32CTtrEkz7rp+d59L//EhUdEMs8qZtSSAHrIVNFI/RICxX6lheoW4XJSRwr\nyGUCKxMeeN+vYG0PR+oqH9wUV1mavqCdC1kpFZgoVWkvLNCIYuatxi8PsHPreSwliqUsReUMS80u\nm41Pv+My16pRzgJ0mFHudymcv4Pnq80c+sdv0F6ahwDud1o4H/gwkz94kvUdgVut0Ng8QnJ8jvv3\nfprjPixmKVGa0jIOoZLYJEEqaLgVHlhZ5crRKp50MFbTUw22OK6HFC6Ztigvwe3GqLLE4nLJnnP5\nzg/vwVqNo0xPLFGKp1Toe+Dg0ztMNchTDnrl85e+8EXkP/wX9AlJuz5LwfXopGeW6HuOSyglxgmo\nZwlplIIrmdg0RpQZZhe66NjQMTB7QhB4AdLRnJxuYmRAKe+yYaLKk4ePcWDqKP/zpteQxcfw/GGU\nl+GIHEkW4zkarUOsrOK7m2kmX/3RjAX4OPBO4OkNyj8yJu/Kl/4E1eoAQnpswWKURjoScLBWEqeK\nMAwBgxAOWEucRoRBgNaaDAO4hLkBrDEgBPoZ+lOO5/K93/ozLv3zD7H3599GtdHijle8jnM7bWqO\nResU5buYKMYan4yEOGhRTBxcY2mFoKTCtTle+Om/Zjrt8sB730V8fAbH9ErUMynRIkAZgyss0kiM\n49FxJI7OSKzC05Jmvcn04iLDY8M0soj5RguiiHqhyolul0aU4vgNtvUP4dVWiVWC11dkyXFYv+dc\nFj1JLnBRrmTzy67lnq98lSBTxDrlvrvu5bBV7A7LXCj6mTu6xG2deWo5aFuPttCEjkBbSb5UxLgC\nL4NUWZ6YOsHuwT68WPeijlLhyh4zRme6twBpDSojilq4ScB1V76A7957N0brXg+87EU0tV7L3K9t\nLKcQ3H2VCnHXEvRPoJwif/TZT/Hxm95Dzg9xvX4mh4b5PkdP3zfpl8hkinAMqpMxffwE5dIQyliQ\nkq3rJlg+dpK88umuRizGNQaGq+RLFaJkmcmRCZxWgwvPPpe9B4/wnfse49zL97Bw4s8YXv8zKNUm\nNIqureOJJk/84G/5/pcewcsf5dnGfwQzegWwaK19WAhxzb/1mP+/mLyP3/xpWJPjvPrqK7nmmqtQ\ngl7JvpTInEOKxTFrsh6A5+dQ1mCkc+oNoq3FmjVW+zMlPRPL6oOP0Pi511LpGAKRsH26RSJ6S189\n7mJzPtZ1wSiEtETKMOvkCBUMKghVhGdgbt9e+jduorO0QuB45EoFkmYLbTIyZXCtIMEipEu7GHLJ\nq2/g23/3+Z5jnFoGR0bIul2mG02MyfCkhSxh57lno63L9L7HUarFZL5M3Fyl7lvWbZhgeHIEp5Bj\nLCyS6ZRAhLQHqpz/kpdy9+23kVhDxwuZL0FTunxBziDGRnASQ6gkxqQUpUQaQ9vtKesLKdG61yff\nQfLDEzO85KwNSCuJU0MQCFwp8ByXNFUYHNJM4akOjh1gtH8IMo3nuphMYbXFSnHaT3mm0tfKwhLp\nxAADE+vwsxUqrf38/gfewnt++wNUh85B2zPLXTaO7ubAkSfIbBchcjRXEuaOTSFEFZmBtR3GJifR\nGvqqFWZPHsMPXC7YtIHVxhSbhobpK1XJT27g6OwyTxxu8O53/S0idXnH+/rI4h2Ewxci4mVu/+aD\n3PdAHVEYQ8idwBP/7kT+j3aWy4BXCiFeBoRAWQjxGf4PYPJ+573vQloXY2ENV4MjeoZh4PTHfcbH\nbkEIicOpbd6AUFjZS0jKZxjLtz7xcbY0Imqv/E3EGqnKTxULIeTiEka2yHSGZ0EZRUlITmT9XPDe\njzM2FPLP73wLG9OIxbyl/tF30ybPYNTmRDGgmsakjiZxDEqYNWkegUKx6ydvIN2ynmywD7mckNeG\nmZPTzGlNKjWVTFO1Ei8nqc9MY7wCTpzCwixyXDC0bR3rN02Cl+slWo3Fi1MCBd0iVDOPwvoN7Ln+\nZdy//wBNR+InKQeKLn1akOtq2r4m0RJpJalWWGnRBpJu3Fu9JGTWkuCwf3aZi0ZGGC5C4hiECJEy\no+j5BNrFGkuaKtK4TRy18HIBu7fv5OFDT/aOXuYp5x56haXCPiXaHeYKrDSXGNvch05WSeMmhWqZ\nmz71JX7h1a9mpK98xrnF1AM2DpxP5im8smX/E/soBz6H9h2jks9BmJCV8/z8L/0q0ydO8Dd/8v+h\nlyNy3iqNrM7uqy6ApMLSXEqjEdNOOyysRvzDZ36ARwB5RWxDTtz3GeTRr/E7/+NdLK1EDA+9ifd/\n4Ewu6dPHfwQzugm4ae2DuBr4dWvtTwshPsKPiMmzRmDQGNETfxZrp2VhBdI+bW16ZhOBPf3eAIFZ\nU1Xv/ezMOIL42veoRA6JaiEyjcay2BeS7ygaThfheripJm8cup5E6w7DQYCpRUwpS1fmSGQDpS2e\n9siJiBUHwiTGF10qmSbFJxUSbVWvTNb12P+5W0iswcsUygq041CJmlSGRjm6WiMvJF0yVGppLjSo\nODFZp4WqlBjeMcpouYInXbRI0TJAi16WWwvbq0lzXLAwNj7O1m6LgwsnMI6grA3GE7R1r6NSOGu7\ntOrVblkn4LH9MyhHoYTCGgcpDG3f5Wv7D/K6XTvxK6BSgZUpkRAIx+sh5JBgMowCpTQvuOpaHp86\nTKzjXm7FnLo9vZ57g17rz5f4ns/JuTbFgUHiFArnvhjcPIv334MTRTR4xn3rLDDSN4jrBcTNmPNH\nd+KERXZtOocHH3oQTQxZwu1f/Suk8HjrO95CLizz4GOP0J0fZLWxj03r/wv7T9yDyVqMjK5nqbPI\n1InPo1SbweHzGK5ezNbLP8KWS2+kduiL+KaIOFMq+1+N/908y6kj1e/xI2LyTq1CklNn3DVWuuyB\nPU8dqazpOYmnSiie9gI9DauezfR2HXvmziJShekmaEeRVxmrOcm5X/gct//cOwkPP4jjlOk4kqEb\nbqA+u4j7yH2Mp3UWPvU7HFUKS4u5gqSUhWSOoouDFmslOZlFGEnmu8hCBR0nuElKLCwyVfhYEitJ\npKTr+vTpmGxlgYlilZNJg6rvgQloJjFef45O0zA+PEzgewhr8aVAOxJtNYky4DroLAEs0utFmFzp\nsmvH2SzW67TSqKfiaAx+ziPLUpI4I0liLIYgB0GYkWZNMuWsaSD01OktcNxmTHXa7HLLJDlDYDRC\nu7hrCi2utSRRhEm7SFPlrI2bUGmKdMTanTs1Q8xaWFmenjBKhyDq1B+9l6WuZdvFl7K0Oku56uAV\nirjumfdNRS0Wuw1K+RyFYqWXhyu4dFPF88/djjKaVKWYnOXwsWN87ztfZ2xiA9orUR7cSRJrVlYe\n5/LLzubhe+5j8yaPTSN72Lrxp7FCI60PokMr+SKeIyhuuJTlH36S
Qws3Pevk/083f1lr77LWvnLt\n65q19kXW2m3W2hefyrGs/e5D1tot1tod1trb/73XO9VpBwIr1jALTztGnQpLSiFOO4r/6j2tZYlP\n/UY8488JM00qId61heN9Zay1tPYeoVwponJQTA1ZkGPw597I5Ft/nhkriRD4epltpsmo8OjEDqs2\nI7aWhjRE9NIAVjikjstK3mHPL93IC9/6BrS0FIwm8xRaGhLfIe4vcunPvI2m4+Iog5toBgsFCrGi\n7PsEjsALPDJSAg90HOGIXuNbr7heEzgaoWKkihEq6+U8VIpVGb4RXLnnIjwkmdI9IblUY4wgTTQg\nyOdDtFH4QQhIrOkJ1J3Oh6yVyn/zyBT1xBLpFJVlaG0Aie+6uDZDJAmq20GnKZ7jUMrnQZvT/fZP\nby2GtftpobG6yPDYEINDZTaVE5yjd8OhHzB1560E0qKe4dYaR+L7DsYmZFmD2uIUjYXD5EyL8bJP\nkHbYOjrCusIEV2x/Hlftej5jYYnWyRkc2cWVIzz52BGOPjnFdddehGsVlWJIrNpoacHtokyecvhC\nfCYIclcxcdnfcNEVz3tWG3juMHmOS29LEGv/JNaeClyubSxGU19Z7CXFML2SirVHWGtOG5CwrBnc\nmStUqmLq0rLoOPT/1KvBWJ5817vxD+/FBDkQmkENrTsfZuW+Rym5AQkSN80T2oAB4xB6JWqeT2Qy\nEpP2JpjrEHm9DrzBhuHe3/9Tvv0nf0kfBtdJGVKGfu3iRBmBlrSDMrFwsdLHJBHVVsKQn2e1XiMX\nuMwu1hkYGiZXcHs6alphtEZKcKTBFeCKnk+nVYoDGJWC0Ril6CuWefF1L8YgyKymHcfUmg3CMEcY\nhiRJgtUOWuXw/RLGgLFPVfpaa8lnhmVX8u0DT6KjhK5KSXWGNhlGKVxr8RzotpskSUyWZFy05wIK\nYX5NRtX+q//Qm2AZKTvO2U7cbSC9Mi3t0h9ozs2fJB9Cqs8UrDDSRwmPzLqkWpMrBEgfmvUllqeP\n4+uEtLHCZL/D1skS6wYEkxW4esd21g3Arm1nc+mFL+fxB06Qz7s06k3+5V/uRWcZrIFdldslziJc\n71w6di9N/wDSec2zztnnsJDyTKaGRWCtRq8duRyrqC0ucOdt/8hP3Hgj2AJGepxavNI0xXXd08cz\nKWTvnPy0kXvBtQSTg/h/+TXyA32YSCLcNrEymKE+dDOiqNssffT3OOoJiiImcyypm+LoAOkornjL\nO1nZtIu7PvBOZOMkZAlJMErWrONLBV7AmM7o+AJjBDnr4yHIDGhX43Zb/C/m3jzKrqu+8/3s4Qx3\nqlmlkixbsmVhA7bxxNQB4wBmCsYOYIbEIQmv6ZeG5CWkQ4bXeelOd9LphKYTaDIwpDMRhpgXZggQ\nMBhPeJ5tSZZkDVWlmu94pj29P05JtgTtlbfy3nK21l1VS7dK9+qevc/ev+/vO9z/l/+N2bKLaM7i\nVU5qoMorptMWWdbnit/7U3p//EHOmiuIidGypr2HoBDW1+CFTFg4vsj0lu14Z5EiBmtw3tMdOnSr\nwyuvuJIbPv93dCbaJI2YgRkxHA5RQtZ9kHyDrOgjhcJ6gccihdxsHAYin3C4ylnyoGwgmAopAo00\nResWyucIl+CcIVWa17zyVdxy+x31inA1cRLq936yky8lcSPi1ddeRSodx275KlXep3Ka3HkmxiUb\n2ak8E1tJnBAkcYyvHEoEEidRUhI1BHGkIIwY9lbJ8yHDYca2uTPYs20aqcYpnaXfdrzhNS9CJimX\nXfpKXIj4yB/+Njv2nM0FF76c3edehFIRlgHaPE6TFxGiLk83njlj8E0fKe8qKpdRmgGBCnyB9wOG\nRZeok/CGt/wUWVkfe8JTapI0TU+6iAA/cAQAOHTvIc5/42uZrhzmH29mFEoImiWjOfunfprhjq0I\nG5BuhBYF0chgrGCqlKTWEDtI1Dh900SMpUQKmpe+hnN/5c/pXPF6XFKR6YperPEuJyKwICPMZc9n\nIYZIB+KqYLq3Tog6TL7sWra+7mfqYNCgmSoc20eKJz7yMXpPzLO44klCoKM8UeSwVYUIJVo4NJa5\n2SlS6RGuRPoKYQp6a8tIZ3BFxkTa4i3XXAceisoxyEZ4AaW3oARSQppEeBxBBgT10ck5V1twC0dX\nBm649y5GNsaUJc57DBIfqto8r6ooB32E80y0tlDmo5P9ldNpLlAfxbJig9mzdvHVr9/J4Q3J2OxO\nhDFMSsXzztpDKHunXLfSBkrrKJygdBGF1eSVorQKp2IqL3FBUWQ90gQmJzTeb7C29AhLh+9m9eDd\njLsBF+44k+ds3cqEdySjNZ63bQtnBMn8nd/igVs+z2DtEH6wRLX2KN3uz+PVh552zj5jO4s1FVop\njDMkaQOCw1mHFKBlgowTpIzxCGJqtCCEgJICvIPg6wwPEfCbF/30QIJzRqvc9ZZ3M6MDpZSESFNp\nj1aSmYbi2NIamQoYCVmwpKnk4FgD09xK5/gqM7bHwT97P8fiiKg8SlRIioUFtocVVvI1SivQsUZY\nSxISrGrwo//2F6jOPw+TfILFO77NtErwwTBQgYsvu5hDx49RKYWxFZEIxJUjvf9WfHuOO+ZHHD88\nzyte8hyUHZHSREqLlwILREISnCGSULqSEDQTrSaE2m8Z4xEW/tWLXsrXvvtNqmGGkIq00UAjsMES\nxxEhz+AEr2szyNR6iXAWgadSTb537BhXnj2DzHJQLVRq6k/XVBT5gKIo0LrFtrktLG6sE8IJSiU8\nte2mBMzOzjE2cw6v/8XfwgvH97/1ORpTu/n8336aBw/spUpO1eC7uveI8Q6vBFKETVslSFWC8ZYg\nFcI1wAgQCi01wXtaTWhMeZrxBspV9IYZQSjGx2aZnJihmcRkpSdUx1i4+xHyookrAs30Kpqnucyc\nPp6xxRKnDQSgooSARKn4KQV6QAoAgThRl8Dm2dgj0PXkkLVir/4Bt0nAe3KMV5aSjMiBlQ2Oa8+O\nIrCuDHe//yNs9Q7hAxsiouktWQWv/r8+wPS5O7jzLT9Lx8e08lXOMBJDitCB1sZRHnn/LyKUo5mO\nUYQKoxyi8oxExT1f/CIXtd/OvXfeR8PFOPKaRh4k3//gfyavCmTIkWNtQtZjMg8M2mM05rbQX1rm\n7B+/nvu++0mefdFZjEU5jhTnJLFQBBc2hVJhk4/lkEiMsQhlkdoz1mox3pjiR1/4Mm6972ZcVRJr\n8MFRGAhaIpSGyiM34XqUxLmA9xCcxBG4b36JPXOT7G4ojMvQoYUjoEKg6o9wowFRa5x//VPv4nc/\n+AcgJdY72EQorQgkUUykE6659k1AE4thtHyAy59/CX/yR3/FrUcEldOMR6dOQ+dsbfVqAupEkpcA\n4z3R5pQ1XuCspxUlxKo+VlrpqRDMdlKec+k5yDGLV22iWBDiNkXZZP14QbayRpoVbB2PSaebOBMz\nckOy3qGnnbPPXM3ia5p
EXeBvnnU3UZFNmcQpW/pJNCyIk3dDNmniUHvvitP5mg7wCpMoslhz1de/\nxndeci0JA5TpE7kE5wSRrDvwoxS6+/dzpjakLsN5QASM86QIKiXwrkR7UELQsJIoNMm0xOuSyHn8\n8lFu+sP/SCQ8oqURfUshE4IFUXWJlOaSH7uOg3v3sn5gH5U0xGVJSsFMMWT10BHyY6t83wkuuehc\nxqJiMzErBiRaBlxwRFLWGTfS4VxFMLXzhyKBUnHROecz3k648ZbvUGEoipzllXVaYx2cC3WNFzze\nh/pzD5tyYDZ3nCjm5nseY/tLX0CS55Q6QioNroIqo+z3ac0adp95DhEKL+rdL5yA8wkE74iE49o3\nXosiY617hKTq87lPfZXv3/h9XOEwPNnUPDGMqYA6wkKi8MoRRRohJVXlUApUEMRKUBYFIonAQdSA\nqlIsLVUMb93LS372J4EZvOgQSEjSmG3jijOeZTFCoXxOb+NxukuPMVrISfXTu7s8YzWL2KSE+6cg\nJ+Ipfwin0iaemncOJwnh9ZPBo7VG6/iU1yi1IVNN5CteQcDSd4YVa3He4bAoW3F4xxwFisSBK4Y8\n+ucf5Tvv/XfIoqLEUniHFYEgDAmO4C2lrUgLQ1dVbLv+ana96VpC2sYFT2ZyMhNYJ+Ky115H1d5O\nGSdYYfHagfDc/rW/Z3joYdqhoopr7+Du448zYQoev+0fEZViaRBx60PLdE2gNAZHhtUBoUFr0Fga\n0hMFjyYgrSGUFS7LKXob2GzEttYWXv7CH2Xl+AajUcWZ27bTjBLwDiECztrNBVPXLc65k9/bYDgu\nBQ8vLDEsDKOqgOCIdQBXUAzW6a9t4EvD1u1nEDatW1EKAigliaUj8pbx2JINDzM1oVHNcf7ybz7P\nE8vrjFzFRm+j7iP9kKG1Yjis7a1OSJSdsxhjGPQHDPOKoCJKYxmVFTZIqlrYiTUR3/74xzhy910E\nFyG9AilxMiLTGikjRDTJ5MyPsOu57+C8V72Hs1/+nqedsxGWxuAAACAASURBVM+cMbj3PJVRdkIX\ncbpWTfyvHifO3Cfi0wKI01LOyiSwMTPD1ndcgxqV3Pz6t7PHefqRxIcWomrwgj/+PdIXXEaQUEUB\nXwyxNuBw9clOBQahRGkNtsIHS648XVkRmg3iyd20p86g8iU5MI9lVcOOiWlu+eIXyNwAnUTkIWAR\nCAyJA+kiogtegD5jD5FsIJ3ASUfTGQKCuNVhYXmJ+/uSlcJTekfAUpQ9ZCgpswHeFURYNBXC1T0Y\n7TwNodAeQpBMtCZ551t/hunmRL17+ECn0YRALdPmSebDyRtSgNgFjEy5/eAh1iUMhgVKK7Ssez9V\nMcQWI7QXvOtn31kX+Sc5SpveCRJe/IKLGJtIaHc6WNPgC39/E06MEyVtlrsbyCgBebrJnqkflaHR\nSKmqCmMMVVVhXZ08JoXAescoy6hMQBDR7RVUxlO4AucGaKtZevA73PaF3ybkd7Bx7Nso2SMJsp4v\nGDw9vCwJ9g6y0d6nnbPP2GKprMUJtSlJtQQs4AhPWUH1YnA/9CGFB+EwpiQEgQ8KH5261NK3/QRz\nywMe+/TfENvA1MYGVhW0nCeVhmFHsPDZryIuOBOsJfExFsdI1cechUSyaDZ9AbI+LalwISELMesq\nY3k05Lsf/gMe/PgfMl95DrQb9N0IbR3l8nEGZeDsq9/J+KWvIBYKFwKOnDR4sqZk2+wO0l0XMpza\nSiQMbSPwPqKvJNXxgyTWY32T510yxaMbY4RiAykTRHC0Ww1SLUkFpFISSdCiJISMyowY9TcIxhBL\nzWRrgrdc/TYuevbzGZ/cRlU45FM6+CcalCcbwZHGC4VyhqFM+Mr+Y3Slp8xNXTuJgLKGfH0V4z07\n53agkwZCgvKOyNaRHOefdx7XvuxK3v1v3ssnPvY5Du07yuc+93nWSstKr0eqx7BGsLS8fsp101qC\n8FhXkRUjjLU4X7+/Is+RUYyLEqxxlGWJdwaPwyGpbE4aR1DFRJWkVzqmOrMM59dY3/s1RChxIcKJ\nPt4OWN3/JfqPvA8ZVmg1n/+0c/aZc6RUGk8g4OBEzSL4gZ3lJFrsN91DTkomwslckBN0GBNO3VrO\nuOJCVv7k80RfupOiKvFJoC81RoxRnbuNZO8+qs98hULDhNQMCHR9ReKhGySv/cAfMr/U5Ut/8Bvg\nI851jpYQLMSasWorLuQM7ZC1xlm85B2/yG1f+msS50jwdEKgLSVHvvJJ2kbTCJCJNh0DhRZM9C13\n3HETL/rV30eedS5rf/dnbC0sPZ9jtKn5XDrC9O6jx4XcuPdu1rc1uXg6ZrwhIfYomdS9JWtIZIzx\nrq5HdP3ZuHKIKzXaN2nGMZecdxmjEkRjjAfuuwthS4I3IDxaxyehX+/cJtsoIDys9YYs5Y5tqUNG\nEKlQd/iLEYPhBpEyPP/C53HbA3fWjOYkQijHytIxtm7t8Gd//XGQCUE3eGJ+hXxkUFJTVQZkoDyt\nmVwUBUmSIISk2tS6hBDQkULrCOPq99lKEhAOscmORtfw8sLSGuPNJkmqECaQP3qIbKUgVRWP3vxX\ntDWo8hC9FcVEMsHM7DjhyMP46X+pNYtweJfXDNWnbP//CyoZsPmcDyeNEJ7sEivw4P2paNjyu34b\nKwpWE0MQgbTy5MqRvvBCLv7YB1jSHmFGJOUQqx1aCRIdU3gYCsWR+SXWu12KUUkcGUoXaO15Ma97\n34d51qtfQZpKtjdbTBcl87d+i4mNI0ybAcpXlEESBcl0mZOIjFGzzSVv/99YGN9KQJJIwURRcN+f\nfoCNL3ySSTFgrVUwnJjGtafwUqNx7N+/wZ9+9j7yMMntC4p/2HeIlaAY5iXGFMhgaEpoSk8cKpJQ\nkXiD8iXaGLQtCHmPMMpo+cCrXvAStEvZvv0ctKqbuko/6YRf964ksElhIVB5ydfufojFLMcLRyw9\niQpoV5IPVunEDa5745tOYi5WemIVMcodv/vBP+ZNr7+Oa171Bl500QsRSKZnpk/e+EIQVPbUaz4s\nHCYoik3T8RN1VFUajPFYE/AORkVFZT3DosAEgXWCygqIJxmFlJFLcDQxJuXw/ID5ox0O3X+YIw8e\nYOnQFCZrkcuKDT+gXNqgPHIXTzeeuUzJssBrSbCOSEebF0o9SYw8MXwAKfDySVGRD/5UlCwEwJ9C\n4QDIqiFSSyaHFXmkyRWkztC/7W5Gj+wlyJRSZDS8xMYBWRboKK09AKTn1j/6PWTlacuUyib4aIg8\neCuPv/8RoqLgXO/IIo2ioNp3L2PBkqkGggIXt9m24zzWD+wjEoaOaXJ8YUQ36jAm1xgqR1xW6MXH\nkTrDes2y2s62y69AFAus3PN9pC9YJiUKEcp3yYTkvtyzessdvO6i5zEnJMIZokSjEaTUk1sBWVUi\nVZ0K7bzEA27omZSCt7z8Fdzw9S+xtKARosR5h9YaISRlVZM1BapO+yIghKJMOtyz
bz/bOxcx3kzQ\nwuHwlPkAX9TGf3PTW1jprtDwIGSCJWb/mkMIg8mGdOIUFUM2ypBSoqXAeU9enRo5sXj8OHlRMNZq\nQiRqJAyJC57gHb4q0FoTC00wDmvKWsYQN0ELghNETiC8wwdBkAErN6iiDrmxWDuGTdbRSkBX4wpQ\nvkCFp4/JE093J///awghQjGaR+kIdAxENTfsxErxoUbLvENIe5LPQ4CyDCRxgtyUwUI4qflyPpD+\nTvvk69z3qRfSN4FGMDznE3/BV958PZNpEz/MUVJjgsHJTU19lHLlX/45n/nQB+jfcSvWOlpeIJVi\nKZS04pQpF2jlEaXMkYAUEUUk6abjrEQRybCPDiU9CZoJ+pMTnNduMzi6l9jGDKOSyie0Q04hGzgR\n0RR5feYWjnxuD4eOz5O2DDNTu1ldmyfb1mG1t05hKjSCICUm5Gyxntc+69ns2tIhji2tuI0JCqlj\nSp9ivMLgcULjRIRDoqMU2WhgkxjZ7vA3n/8o43OOkSspywKROwQxSxs9gm2wnvcZ9HI0CSpUSC+5\n/gUXc/n0OJVKKUKCaY7R3r6LdGKaxe4af/SxP8IHWe9Yqo7QM9bw+itfwqg/4K4HH0bFEWVR62qG\noxFCCA793OLJ67b7o7NIAq1UMzM5wfjEBFrVzVWpIYo1WksiGSFlQAlHrBWp0ggliLREykCkFZGO\ngIBSEikkUtU9vPFUE0uFFBWdMVBKEMUxL7z+I4QQTq8GgGfwGLa8vIKQehOgsXgsIViCACcUNoAX\nGojBJwgSpEyJdLwJh3lqyfGT/2aQp/4fS2J2vvud5L7F0dV1qm1zMD1JoxXjVYEVBuUA72iZnLVE\ncNk1P4X2EUopSinpBYdVkkEJ8twLQJs6I07WvQkZJA2nuejFL2TU0MResBVNWwyYHiyQHTtMhKhV\nmMkUHWnwQTKaPoOLfuE3Gc2dTRCBYAq68w+g7DyNdcH0G66jawNZb4Msy07CusYaLIJVrbjhwD6+\ndXABVTTweY5yFdgKHSoUGdIUCFMgTUGocqoqw2R9yDPizPLGV72MWB+nMz7Ajwa0Isv4mGHXLsm2\n2YoLzp/mzLlJdHDIIJBRys0PPkyxaUwhvMXlQ/LeOr4smRmfYmZsBrm5SKy1VFVFojT94YAHH3sU\nL+pdLwjBKMuIoohG41RjOxvq41lWGhaWltn3+AEOHT3G8uoaWVbiHTjn63CjqsI6cEGSV4bSOkrr\nqDyUtv4ZLxSVDTX/EElhDBulY6FvWB4FVkcwKAUrvX+hyV9TM7N4HzZZtlXdWPOeYApUyMEOkSHb\n9NF1hGAIwaB0jYSdgDuf2rjU/tTFsuPd72B5rMPKbMxjP/9b2I116GV1xIOTaN3iaGeMxVaH4D03\n/szP873f+jWmrKNtFDIofBAIJ8gabV7y3l9nQxWUSmHlZvPUOib7G3T/4RtsHxmkiklChHvOv6KY\nPB+hPJUT9HXgut/9H5SNNiI29ELAzJ5LodtU3jESCuE9KTFR4jl8442MTU/T62cUZVl3tJ0jEJCb\naNNIwB3LS/z1vXez5KEqK4SpEDZH+YKEQEuAMBXSVgRfYKoRohxR9jaYVDu4YPvL2ZZu55Lz5phs\nGKY6hrntTbbv7DA+p5nYKpja2iAISWU9C0XOcJNFgLfoUJF1V7FFTqwifuyq19FqNmum8yZgUGY5\nvfUNhqMhQYF1jvV+j6TZQEf6B+pUD1SulhsgFD5Isrzk6LF59u19nAMHnmBleZ3l9Q2KypJVlryo\nAIW3AWNroMNYR1E5KutwDsrKUFYGkFShtpgrKsmwkKz1LGv9U4m4p49nzmSvdaKTHBOpgDOeOE0B\nw+GDe5mYmKA/GLF9x27273sEpSCOAmONcYKMmJieqjlhIsWegEBPG4daMQf+4rPMbAwIFEwOHDIO\nxCqmFzRdoXjFxz6M7q9z6y/9Ms9aWaFsVizGgqhSSCcp4kBkG0Qq4osf+mPaRiGwBBGQxuB0YBgV\nKKkZDwkjr3AInvOaN9Gc28Gdv/0udJkxJOah/Y8yaLc5Y5ixc3We/f/+J2lR4ILDh5hhJBh3LUq6\nlI/cTdg6U/sKYzHOAoLgLVIoQhXwOLI48CCBjTvu452XXU4qKhqxJ46buLKiEoEkScBZhIgYWY8P\nBV70MdEWdm59Dn5+kfXhfmYmI/pVjnQKH2uUhKQTmAoxK6t9GqqJEi2ObnS5aHKCICVFBYm2jFaX\nQcXs3rkbV1iEDXhpEapmky+sraIaTfrDIVVhmJ6YwIWANdUPkGBDqNkIhfMEKVEIEhkhg0cJGHT7\nrK6uoHTMtu1bSeOINI4pxlpoCa1mjBcBHTRBGKzzaK1qW+AAioDwHiUUEChKSZ51QTz9cnjGFsvq\n8jxTs2eSW0sUaSKlyG2JkpK5neegRERzLFBmI8477zyCc1ifEekEFwTeB5TSSGEJHqyxhOjUBfPY\nR/6W2SpHlYbIKXqJwEmBloGZrMTHCe6hw0zunCAqIFMC4xokwuAiGOVgY4OMNKoa0dh3H4KEECxF\naNB66VWEfA1x/72EylEoh1CW2I547L+/n/T5l5B0B+TCkFjNox/8z7SUphu7ehe0lhAqTBKQDias\nwzb6TGeSo3KNbLVHJOv8Ex/CpuZnEz5X1Li6qZunxwR86MZv8+YrXshOZ5mwdb0VAqAlQSmk0ORC\n4L3De4fJC9JWm3POeBH9R4/RaJXQiOkWEUYZqipG0yQf9Gl1GpheCQhufugRLv7RKzG+RCcprsop\nB8sknQZazvC+X3gvf/bxj7AyWqMKhqQR0R3lDPMRCTFz09NUZYUkIGWMlKcultooXoKArLQkMqCd\nRCuJ8W4zFU6RqIj1xTWG2Yik0WD2rC200xTV9UxNdOi02iSJonIWqSSmqNnWCEsUN2nGirKqjcoF\ndajU041n7Bh2+8034V2FFgIfPDaUaA3BebRsQtBIGWPViKrKQMYoMYUXMcgUVL2j2OE6Tzx2Lwcf\nvpvbT8tTb3S7TOUlJSWlCiQ2MBGaVD9yAWvtDt6OuP/3/wv/8J73oqTDKE/whgjH9Juu4/IP/SnN\n3S8hKiPiqiIpS1LnUHj82Wej3vluzvq5X2YZBQq0kGg0NrZM2mWim75JoQNNNPiCXBv02XO0p7dQ\nBIOPPE6lNC57A/ELXkaXJs9+1fUsyQ5eSFxZMdaZIHiPtR67mer7ZOCpREqNd7VKdLUzxqfvfIBb\nDy3QrRzeQxIkOIOXFa7MiYVDh4okeEQ2JF/tgtnCc8/7CfKiQ5YbjJe4kSMblmSDCo1CWY8NHqsC\nR4oRyxbSJEVLg1Ye5SoGy0tU3Q0i53j3u97F2695K01adf5jNmJqrI2OA3nZx4uSICqktJvRd0+O\nJ2Xkou7/oOhXBUNvqRB4BFJpnLEoKem02mgpWTw6z8MPPcbeg0c5cGyZY4uLHF04ziAvyU2gsL4m\n7coGSsfkxuIEWGvJjaXwP7SuPzmesZ3
ltde8heNHjzC3fQdCavCeUTYieDD5KlVVsbK8xLDXpdft\n0ev2CQiGo1XO272HhaPHaUYJ7bGE0ShnbutWOvpU2kT34rM4865jaKsx22c5WA2YXUq47N/9Il8Z\nfQh95/dRpk8DKGQgIJEiINA0kCw2O+y69MXs3fsQUQkOj1CBIC1udZGdxw7xxN4HGI80Idj6Zu8d\nURhnFFvGmglzr76Wo3//GRIMoT1GPx/RW1pjCknhSoIWnPcjV3Hbl27ASag6EzWzWAhE2qEzOY7o\nLmOtJdbRUyZSPYKX1ExfRWRhFFJuOr7OY+trvOXFl7IjihgTgmGw2MrjvMMHS6IjlPeIKsPYgqR5\nJs/a+Qa+e//f45sGHwS2qpiYjOnmGc54tNJYEciThK/fcRfXXX4JiZZUwdY7XplTra1SiIooabFn\nxy5+6X//eX7/g/+N3efuYXn5KK1mA2cD1pon+zinmeydCHb1HgQaJ8Eg8KYi8rqmHjnqmG9val6g\nVDRkE98UlAZW5wccPnScOImYmh5ndmqcVjNhfKxJkiY0ACkFMkhsMCSNCaw7tfVw+njGoOPPfeqv\nWJmfR0iB0BopBY20SafVRgqoTmDpShNFEd7XDaq8yrFFRRIlHD1yGBEFLrzgIoaDIc1mygvveOvJ\n17n1+X/B8ff8GWasjS1KLv+vv86D7/s9dv70G1m7cR/9h29GqfqCeSdY1ZbUS6a9Zz0SxGedy6GD\nh7HCMmk9KRIrPakVCAmZiLACIu9ousB6LBi79EV0u30mDh8E77E7n0tx7FEKM0S7Jgvn7GH37nMZ\nfv2zdFONDo6BalA5QyEqpG/SlQHbjJB5lxVnWFcOr2PSKMZTqxCtqPUt9dGsDhPSCIIHIQNJJOgQ\n+JkrXsJZkUN6y5pLyK3EK40QMaV1qLiBb7aQjUl0u0WvWuSxfbexd+MxmqllfHIC5xsMVz333bWX\n0jZARUQi55dfdiUzShAaCb1hiY7HcV5TNGKkTkk6M0xu3cYgZDz4yL3c8+DtdNf6BC+xrlaAWl9r\n/Q+860noeOdHtm3unjWDIAiPkgqha1RFeIdSkvEkJjiLkgItqF01Eci4BmaClxTeUIW6hhJ4mp2U\niZkOMxMtkkiTJBGR1sTNmKLy/I+Pf/VfHnR89MhhjKsFWwKJdwEpFVVpEEHSbLTxroaGiyInz3O6\nGxsMR3mdex4Cs3PbGB+bYm1tg4DAngZmNHvw3Z1NXvR/vodyNOKu//CHXPXf/wvrH/kyw/2300AQ\nnAdbU0Te/Ae/x9Wf/BQbUcSsC9h9B0h9jb6NIkccAgJHEDlO2PoIgaOIoR+BPWc3c298Gy/76etZ\nNjl9MtaP74cYSCJGWvPKV17JE7d8HaFrsqFwhsTk4EqEF0TesOfsc8gHGQGHkaBF3XuqhVVPhsq5\n4PHBccJfuNrUu1hnKZ1nHcEffe6z9OIYLwVaS1qxRouwKfSq+7kuK/H5BiHPmEi2csHuK7jqwrcz\nq/ewfGSN1Y15XNnl+RddSCMWBFPhUNz+2H76xiKCpNNIN30CSkIxwOVDitEGy8eO0JJNnnv+xVz3\nxp9mz3nPxjgDKjAyGT6ETefRU0fdJK15YpGqXWa820RApcIGwcYwIzeO0nuq4DG+RGyyyVMCTQQd\npRnTCY1I16aBhWH+yBJ337OPW77/CHfdfYj7Hz3C/kcXWDk+/IH38dTxjC2WUVaR5RWVCSgV02yO\noaOUtNE6aYOktaYsS8qyxBhDo9kkiqIalvQeFzxpu0PcbFH5wHr/1LTie9eOc82b38zNH/00ZRzY\n0l2nTAKt/gjl1mtBkRI4bxEEvvbt7+J1G58myMqSWF+7qQRLyziKSpNF44x0iyqAcqJGaqwndZAf\n75LML3PrJz7NlI9oec1sUTI0gs7Q0sRyz//8FDMm41jUoZrcihIBEWxtSqHBh5yVQ48x0xBkCEpR\ns7Mj96QllJTypCrxRP8l+DofxftAEDU861RgNLaFz912D5VMaMUxY+0GiVY1tSeKCdYhfSBkGWZj\nkTAo6DTOYLpxHi++4K2cv/ulTE1Ns+XMGaa2TPPc8/fgpccHze0Li+RCYIoKZyyRsEQ6MKYh8gU+\n62HzLv2l46TENNQkV73ian786mvJ8gHNhkYp+QNomLX2ZFyFEE8miT3VPxnAx5qeN/SKgl5VUApJ\ngccEgXMeHwyaQCoFjVjRbiakUUyiEprNaRp6HFMK1o/nHDiwwAP3/wtlHQ+yjMIYyrJibXWd1ZU1\njh05xuLiIqurK2RZxmAwwFqLlHXI0YkPKknqAFBrLcNRxuLxJVbX1rnpezef8hpTk2N02571McNW\n49BJyi2/9Bv0mkMaZUJRloy8pVTgBwa++m2+9OOvZ7Z0mEhgde2UKULdrZ99x/Vc+v4PMPfjP4nU\nGoNHerBCUknB2HCJhz76O4R9DyJCRTeCJRU448pXU42PUySGxG9A1ORFv/Dr7Ln6bRgiFKDwaBSR\nVsgyI7IGoTVRq0EzSeujl3mSKCpOIx/WLpMC8DhZE02TPOB1yh2PP8FACiItULEkjiVaKjAOV1mC\n90TCIfIeJlskii3ptpjMR7z48qtpt3Yi5lrMnL2DsVaTZjNCOE0/TTiwuoSpHIiAdRWYHBVKUukQ\nVU416LJ+/BBHDu0njjSTzS2cNXM+/+l9/5Wmb6KCPKnhPzGiqK49T7AATngtKK1RSp2UFhTeg1L4\nJKYQkpXMsJo5CpEwdIFcWvJQUZqiZknYCrxHi5iJWDDdiphIYbKpaDRaNJrJ087ZZ2yxtBoxPhgK\nU5CbimGWMxrlrK13OXZ8hX0HD3N0YY1jCyscnT/ORq/PIBthTEG316dX5gyMI8iUm26+lW1zc2yb\nmzvlNZpRinCS51z9GlYbtbl4x0XIODBqdcifvZOXfPoGBueeh8Kg8py2kPSzghJPZDSonEomNJqK\n2bExis44US4gSIK2KJsRbAUikKqUyEe0TaDbsMQDuPwXf4UzX3U1PdGgbSOGiaTvA0v33csjn/0b\nTGFA15JYW5Z4EeFErX/JZMWu7dvYMjeLFA69uYtY506aoMvNRSSokUSCQG7uQl4FUp8ROglHRw4Z\nS2JhEA6ch+Bz2rHF25zK2ZoFPhrSf2I/7dBgbHqK3iDmiovfxkRzD1MzZ+KEY7bVxskMCsF9h46y\n0svITUXpRiAlAgdYWg1NLA2pN7RsSffwE/SWl5jasoWltSFveN1rkFj4ITtLCAEfajCCEHDW186j\nwSCdpcKQikBDpRAihJYoqUEKVgYjNgpLN6tqlx3rKStHZWvBm3YVo7wiK0q8raXSiYDkNF3N6eOZ\ni8nbZJMaZxHUd4y8yImiaHM30YDFR3Fd7KsGc9tmsTYjKIfxgiNHj/Dww4+SxJrtZ+ygcvYUX+de\nUTGsMo70e7jLnsXwngNsyQzLvsFkWbC8NKQFnLlzJ9kjj9Wnv+Cw3hPJiJUIQNAyOYUQPPixjzD6
\n/N+SbAwRzhEJgWOCntdMaY0qB/hYYKQg9hFtnXDvJz7D9HPPpxR9DJ5WofGyYPStbzBGxvGxcV7y\nghdz903fpNw6g11dIUokpTMYAlVwTE5PsbCwSFVaXGXQSU3HD84hlDp5XBFSbnqr1apT6wJxFNG3\nA/Y+cZiLZ5+LDHVumncG6xyRhERAWTkSJcjzDO8Ua3sfIp7bgYpS3ChiariTu+/7Jt3+ccoyILRE\n2pKQChaHC3gxQVNJfAJRolGxxJqcNAhi5QheUvZyqrwLjQbn7jqLQ0e6tfnED2kGel/Xkh4I0hLF\nGkGgIRSxBuENthTEURPhDMFHoDwhKKQUVEFgfEQ+smgCcazQ1D0cH0TdqBTgawp7rWOX/0Kh4xP0\nDesszlsU9VGrLHKiSJ4kSSqVcuaZZwCBlZVlwFJZx3duupUtc1v51d/4Db7zrW/xt5/6O6644kdO\neY1/+MY3GHU3WCiGXP/atzIcOFr37Wf27T/D/Cf+nLlRxRff+S7apSUREi88Igg00J/dwqv//W/y\n0d98L53eEGE1TTegtdjDExhqhTTQP/ssrnjfr5IffJS9H/4QbeuoooQ4VLjIM768AIvzjOuc4bYd\njNJJovkHaJY9Mik589XXMpzdTn77bTTdBEM1xApDKT1GBKwWDDcBjjhK8b6WKcgAUtfGHScdO0PA\nB49SJ1IForqTLjQH5hfIzfkIX9se1YvJoWNJFBwj62joBK0UlQmYXp+h3cfEtm1Ixtgzcy5N1+PL\nB7+CrSyuNGjVIB7vsJYUNKxBk2BsiQ+GyNUqR6kUwgaUqJvIwRj6T+xn+6WX0Wy1GZ+aZnW9+wNz\nI4RarSqlhLJgttNmohWYnknZsqPDlmdvpTmeAi2+89X97HtwhcpnKCUwjprRHASogPGWyjmEMWgE\nWmriON68KYIWgPenb3A/MP6pMXlPAH1qmaIJIbzgnxuVVx8dAjPTUzzrWc9h8dgC2Shn97k7Wd9Y\nYqzVRmtFu9VBCMFg0EdHDb79jzcSJxFbZqbwzrPe7fFjP/Y6Dh86yJatpzo7/8TbfoJdZ2zn8cWj\nVD3L0Z2zqL2Hef4bXscjN3+d+OgTbO0GfCwok4C2aU3/14FQRgx0ysz0bsz6w3irWYstjSoAMbEt\n8UIR7zqDvNmgYSKElIwijwoO4RVGO2IJQ18SGUEYeK56/bXc8/H9ZH5AKpsMvnM7x7I1AsusLK2Q\nNBrsOncPS/sfxgsojMEJmNkyw6g7IAiBqyp0EuOecnXDpkyhfkhCAC9rtraWmqP9Xm3tamsSY6wb\nyE3zj4YMFHGCKXKazTbBUMuYM8docZ5Wsk7nrPM4+MA6V132evrqGMNkkdbkLI8+egfLSz2aPkE7\njfKBoqwI1pKkCYmOmIwbNIBIKazJsdUyy8fm6VcVMknxp1GVgvcIKRFS4b3l7KlpJgUkJRSLffqN\ngFuRzIztIo1y3vjOy1hZ6PIXH/weprK1CZ+QaGzt8SAlLkiC0hgfsDaQuwwRINUKKaChxKa44Z+5\nWKjrxytDCE/Vf/6zovLO2XUmExMTeB9YOPYErVaTJ11nxwAAIABJREFUs3aeg7UlOtqCcoJgA1WV\n0+v1EFHCP3z5a0xNT/Hyq67gczf8Pb/1H36Hr/zj17n0eW/me7ccJW6fyl7tNFrcctttRAraiWb6\nnDmESHjksx/HVbWJQalrnpAXlqFMmU4UovTE66t8+dfex7A7JJEWfeYO4vkFouAplMOpiMQpunfe\nzC37HiIZjBi3Bh00MnJ0t55D4/AByibEokGmAknR5c6/+iCJUgQf1SrFwTwaw0g2SZKAcJ6ji0cp\nZMB68M7htEA3IkS/3vkkgeDs5rHrSSOP2gmSkxLhEAJGa3RVQ60PHjvOCydblM2EhjUMQ4wJlvE4\nIbaBSkmkd4xFiqzZwIQxRsvzTHaGHLy/ZCr2FAcPMTO1hTPm2hxfXeAVL38NpfB8+2++hJYxbTOG\nlikhgdGwoF8VjOINto9NMSkhENdF99o8mWhzxo7zWFk+FcXUclO6LKATKcalIBYVQSpSnWAHgdGa\nYGV0jHZLstc9wvRUwlvfeRmf+Z93IrBYoxAqoFxN+VFC1tJkPEGJmiArBAPvEEHQqwzqNKXt6eP/\nTYF/+oHuDdQReWx+vXbz+5NReSGEJ4ATUXmnjJ07d21ChI7t26c5Y/ss3e4SztbFqsVThsDy8SHL\ny0OGA8G117yNV77uDQyyiMktu/nyV77D1m2zfP0b32TX2btYWzp+ymvs3XcvIYwIvuT42nGiKCZr\nB/KvfZ2tx56orZMk4KGftHn7Z26g+ePXYuIG2hWcuTZkxhhkGtHrrtT1gFAkool2kEclM8MhZx+d\nZ1vWo2EdHku+9RzO/fUPcNl/+jCRiBDCEKuALj3NAJW3uNlpMq2wwWKCx3qBCp5YKdaHA6wQCF8T\n/wgBFUckaeNkWJAz9immdqc638CTGfTG1l1pqRTfvPdOBiKgXcCoWtIbfAAsSaRrp34ZaKaqVnk+\new8+jsiP92iZZS5+5Yu4/9g+Fg8+RnbfClMHtjH/wCL3PHAPr3vzNQxlycYwo2sqcgsrZsSBcoXV\n4DjUXSevciLhUc4To9g+O83rfvSVdbLYU4cSSOFoBs+2ThsRHEbKGp2UoEWDdrKVIw8uc8/NjzLq\nDskHfVw8z6/87jvYds44Kgikc4gTxucCtKpDmrQUxFqiZCASIGVARgojn/4c9k9dLIF6h7hLCPGu\nzb97uqi8Y0/53R8alXf+BRfwt5/8JFk2osoL+hsbTI6N48qKfm9EXjluuv37HDp0kCRucPFFF3Hj\njd9l8cgRytGAs3fNMjcniVyJNkMOPfIA/dVTF0tZDKnyAmxgIp2idBVzV1xAq1fVBR111xutSF1g\nQwYuv/LH6Jclg0jghGGt4bGVJi3r862NE9aFYD1q0og6lLGgTD2FDAy0oogcUhtmRcWhh29hFIo6\nWwVHJRwDRigds72xjaoyjLylCAGLxRiLFnXMhNX1BHdFVVPzlaA1NbYpA9ZILakqc9Lp5uQF3YRZ\nT/DH6h5GQIbA4UGGn+pQbqzTbCqUTvBOEKeCiVYDIVMsMOgdoxXWESuHCPSpJgvyxLIyKnmkyvny\nEw9xw76b+NJj3+PAvoOsP7TIA3ffwfjWSVwaUyHoZZYjgwFrbcP9rPIIQ7679yAb/YzIebJRH60E\nx48cZktr4pTr5nSAYDZTlB2NRkQjrg0qjPdYWzDekbzq6st52SsvZvdzzmXb7j1Mb3s2E2du4f0f\n/I80GjmKTWQuOLw1CGdJtKQRKSIBka6FYomSNHVCov+/gY5/JIRwCfBa4D1CiJc+9cnNDJan4838\nwHPv+5VfI007/N3//QUOPHGUlZU19j92gNwG1vp9Dh46zGg05GWvuJKrr309Dz5yL89+7i6+8YUv\nMDfdYWpM4aoew/V1dPBs2zJGdJoXkkTQTBtEcYw1JWd
OTDB78YUg2xgiqqA4kErGRYOGN/zd9dfx\nyX/706Q+w3hOevY3jCbO6y6xQ/DSX/k/uObDf8JgaiutkGKFAhchraRRSZoL89z2yz9L94s3kDhL\npCTD9jjn/uzPo2JJLgTsfQKV1kZ5xpYEZXFaI0JgQK21iGWond+tR1PTPNCyvltaTxxpwBNE/Thp\no3aiaXdysbDJr9L8wV/cQJ/AysIiiR+iQkHhAlIUhJBx9NhBijJn8cijrO5/gFBuIGNBU2iqhRV2\nbT8POzbJfNJhrxuxsl5yyc7nIvKY3kZGZkoKA3npcMJRaEtpRxhTkeWO2+9+iN5gjaga0D22wM65\ns7jo0gtOvW4+0NIRqfJYU2GsA2fpNBPSsYTQCjz+2N08/sSdZHaN9f6A+eNdyqqNjrZwfOMI//rn\nfoGgA0kkibUgitQmC8DhrEWFmvZf5hXr3T7r3Vpv83Tjn1SzhBAWN7+uCCE+R32s+mdF5f2b699C\nXpYsHl8k7TRptcfY++h+7rrjDgb/D3NvHmX5edZ3ft7tt9ylbq3dre6WulsrsmzLWyxhYxxscAwm\nxBiDk3M4h7CETEzCBDJMksMAw4ThzCRxgAMGQzw4hASwWWxjbAOyLW/YkmzZliVr39V7dS13+23v\nNn+8t0rdshHMkHPE+09X36q6dW/V+7zL83yfz3dW8dIbX8Kx9YNs7W5z3wP3MFjSHB4e4dj3vYXz\n555Ca8XuOPVCCCkIwTIcDi79IVGQZQbvO0QueGI2Jm86OtOyVAdmJw7w9//j27jzB36M6fQsG75E\nRo2XGYUPTAgMraBWligVQXr6Nk3kc7kmrA4xp86gVUZUDUZUdAwJ1lLGLjl24al8yfPe+L3om9/I\n+Hd/h3W7y05fojpFPLpOdeYphlFy3Te8kodu+wyVEogQsTjWeppxN2X14FEIJStHDvDwl+4jRyaL\nCSCGZBEYQwRxcUPc01YQRIdRmi1K3n7Lp/i7L7ieA0sK7Vs2t04StcHrjMwEpl1HJLI1Ps+5s6e4\n/CU30PiGEC1XHTnC9ctPce2xF/Ar7/sEy4evJWrB+tIy60sDwgHQaLY3t9g6m7OzPWFQ9hBe0imo\nYuCpzS0OI8miph4vc8XllzKGNYZcpTS4jolEaXqC/kihRjltHrEOxhcmdHNLORwgpOJc8RhXnThB\nf7TFd/7Dt/I7v//fcZOOtvPIqBZHTpV+Rz75kS4vLaGMQucShWZr51JI+f+nYBFC9AAVY5wKIfrA\n64CfJVnifR//P63yJlVN6yxHjh9nuLTEbbfdzmu+9e/xu//lXVx11fM4fvkGRw5dxsaxK3DacO/n\nP8cLbriGJx9/kLPnnmI6aVgerZEVgq6r0DqjbS/dWbKswHvIdMGsmuM6Szi9RacFq1rhz2xTTxv6\no2XkNMMicApCsFgvOdvPCC145cAICmFAwu2//A7ycoW8OksrHYXXWCmYRyhlAisgRZKgoBiECfd+\n/I85gqNwLYUX7Fx1BfGs5eqXXsn5D58jC2nib2uPj6AQZFnGddddzix4Aop63iBiJO8XdNMGnEca\nhfMBpdUlJqh7AEJg/+MQIlV01CHn3Xfdx1BZvv1Vr6CwMG9afLPDNSeOM3M1VRU4eW4T3VgGIpJn\nmsZblvuSW794P194YgtMn8NLBdFJvAgoadMxUDasHVzm6w++FJdHOtcwOT/l3JNnmdcVp2Y7eAkH\ntWL2uGL5mqu52C3EIcjKHoQ67Y7ConKF6QvykUwk21ZyWK3ROIubN0Qtme+eZbL1ZQ6sDtk4MCK4\ngIgLpXUIZKZA6eSPoWJO6yxV3ZApRSYXjm7PMv46O8tB4L2Lc7EG/nuM8c+FEJ/nb2CVd/99D9BJ\nOH/hArPphH/zb/41y+trvOZbX8vx41eydX6LzfEuG+YYs8k5umaXj3z4g0znU2KEldUB8/kuB4aH\ngVRbeKYEZDqZpYyQc0gfaYNl0Ea+6V/+Kz77ex9g46FHuf1HfoSN2uMxBBJDNwZBfPlNvOEHv5ff\n+4mfxbtTeNnDe0mnPUu2oqznNLGhyjQqtBTf8Dqe94Z/wN3v+BXCow+lFyAgykgtJYNHnmD79Nsp\nvGRcekIrecWP/y98+pf/Lash4oRl8vBj7MYWqw1ZBCEFTbeFjIZlPaJpA5PYcv0LrufOz92JQSOi\nIESSA4FWl1hwxJjQRt77BABREeU6vBLUUqNkzh9+8vMcWVtjd7bFtz7/emTXEWyHlZrpZMZ1R1bp\nSU2n+ui2pV9OefGrr+LL901xjUYGh/QCpEGIgJB7vGqLio7SQRCSlctWuOzgCCGgax3nzoz5yrmz\njJqAWF2Hi64tb/rut/DwF25jdvZxlAmILJL1Fbp06EFA6ojWktAJ+sNBauCK4GuLm57iyIFX4F3B\nYClnUjVIFVFaEYIlukToFyiyPGNeTSn6JVoE/ioB/l8ZLDHGx4AXfY3Ht4Fv/ku+5+eBn3+255Wm\n4I5P3c7qxjqvetUruPeee7jry1/iyitPcGjtELatWT+4zh2fupUnHn+crmmJQN/0QEDddqjcMJuf\np+wt0XWeumou+RlNl8R4TRdom5a6nTCbNHz2vX/K1/3Q9/LgT/80y00CIETVEoPGuAqtCy6Md8iU\noc5AdBb1gmuQ65fBR26hzTQEnwyFosfqHlff/ErOD0csn7iKM08+Qa+NTDNA1gxbRZcLZBOI0mF9\nn+GFTSZ/8h5WJ4HzwTIUhs0L52h1JI+pSWmpn7O0dAAVI67zeG25auUQJ+cTjh8/wsknz+C8QWHQ\nsqWzPtEaYyBquZC9pIUEQgJsyAWlXghsCHQCHtnZxpnAXacu0D+aUVVj7txtOL27ywtvsIT6KKYH\ncRrJVnZQasyVRzTLQjLoGZSyBBfpnESrVHXPqfFEKJexTYW1EYencZbRYI3Lj5UcObRM5w2b5x+6\nJFgQnmNXHuOeMydpcfRKhcgt+bCHlwJvLZky+Kwjyg5hIkYojDP4eJbV0fOS/0wE1fMc6OccPbbO\n8lqPldUR11zzdXzwT+7l9k/fxaEjfdx0hDO7yL8iHJ6zCv6dX7yL2bzGlHPe8wfv403f9UbmdcvK\n6hqf/vRnePnLX84HPvBhXDXDZBlFWaRjjVRkWYa3Ch8sSMMTT51GKbOw3nt6PPrEk2itcd6ChywE\nQhbJzj7F53/+51h2qckpSChDxlY+wJsKEwJLTzzB7/2zt7LsBXk25Mbv+sd0oyUe+8qD5KcuYE1L\nI8AjUXbObb/wn9Crl2E2H8OZGcEJMlHg45BZ5sh8wMmARlIRUHbG2S9/FhkDayurbO/s0PQzYrug\nyuuICx1bm+fS/53n8KGDSOVY1oayt8z2+bNEb5hXXUqFGk3rnk6X7rtvXXI0e3rsmxHESAgFX3nq\nHKfOnCNoz1j1qbTn1HpgmD/Ikt1hPpS44S6rKjLIBGujktPjuxGxR8aIkRmCU8hC0+JxQrK5O2Yy\n32Y6d7Rdi5
ABxSn6xrCxvEKhJb380jyTcFtsHFpFj/o0s22cgCYKxu0c00s25WLPTl0K8JEoA0FM\nkSiG2dV41/Jj/+ub+f33/i7KBJzdYnO8zdb4Aibr8atv/01OnbzAT/zYD7LrxngnWF1bfdY5+5wF\nyxMnT3P9DTdQDgZ84yu/Gx8C993/Z7S15fjlR/n99/zBQu8k6DqHx5L8Wix2XjFvG3qDPrPJDlk+\noK5bXF3DxXd8IWnajrZpiUowlJoWQZjscNx5JnsTRUQ2Vwa89hffxp2/+du0H/sYVjasCcnpQuEQ\nfOQn/zUHXvpSzu8+yGGdEYRBEAhCEAwMwxRxviIoi2l7RDLCza/k+u/5Jzz46/+e7N57mGpL2SUw\nXBMrSmVAa0oHVsNMBhAGHRVCBI4fP8oLrrsCJaGfl+zs7OCCp9qcUBZ9br75Rj720dvJ9ADiwrBV\nBmQCSxHCxccyj9JPp5VTEMFoNGJzc5MYNFVUNCIilCGrPLlSnDs959D1c6bmAmZkaGgojEa5hhAd\ng0FgOpsx3arZPPko12wc4Yl4Ad8LmLU+Ki+ZCMWjm+cZz2qMEpRZztGjB5nUu9idGQdHlwpgZbXF\n4ycfZ+OyFR6+/wy11wgrEGQEH/Ex4G1INh5aMZu3KJ3R7weOX3Gc4BW//o4f48GnbgFjMYM+y70B\n4x2HazMefupByB/hxIkbePcffYh/8Pdfi991bJ7ffNY5+5wFy4kTV3P8yqv5xCc+yYMPPsqpk0/w\nQ//k+zl/+iQnT55mNqvo9XqIIIgxYLsksmyrmrppWN1YZ3c8oZ23ODelbR3WtnDt0z/DBehcQOU5\nMUQaEcjmgDPMTSTUHkwC1/XngnFjuOEVr+SOz3yGLgS09VTRkSFZKyThjvtYKQtM6NHqjpiqNEQv\nF/awHmU13jQQW669/uuogma36HEogBWWXjCY0FAJjY4FLgiyaUPZL9jWqZHKEdExgeG0jnTWMx7v\nsr40YquuyPOS0WAEIfLqv3sTt/3FXakPXwUKndEFj+va/R0l3Vsi/X6fpmkWPTFJ3Tse7yadlPXo\nQoMIhC7SmBYdSp58xHPN1Yq8DEi1TSn6OK/x0iN8ZBAlcxeQTc1y61nrItsjSe+qPnGtA7nLss4Z\nXV4SwoBev4fODF3rsVi6qs+ZR3YumRuxahkpSdnTvOglL+TkucewoWHWRro6QAhkLunj5rtzpMlw\n1mKdZ2NjhR/9Z2+hXD9LP78MpRRnNhv8WkQITSCgdM0H/uz/5C3f9k6EX+Edv/5u/ukPfBczf+kx\n/pnjucO3di20nuuveR6rqyvkX38TD375LmaTMdniyLW7u8tgsPQ0qjV4qqqhaTvq9gJVNUcKiVtI\n1xN47+khtSI0yeVXCYmMkkJ1XMgtq0S2csG3/+i/4P2/9HYGfsyXfuSHgQg2IrXHeIMIEi0ColNU\nWUXPGayoaYLk1T/5s7hS8ic/878ztI4sKBoVMWGIjp6HfvtdbI/ez+rWDo0M5E4Srns+R190GY++\n58tc9xs/x31vfSu9kLHU1GwWkagDhhwhHf3+kCha+sOcyc6UaZQcuOIoL3nlyxmNRhw5foQsK/iZ\nf/XvuO+eJ3A+IFwyVXXKECWp+q8kSknm8wS+Tlbckkyr/VpMlkHEYbShix150We1v8qrX3EloT5J\nVE8i85KukjRzC0IQWmiaDtloZOs4cOzr2C236B1zLI0qRoOMkJXM5g2j1bSgeNUh+gUxX8O2AR00\nV15ziFvOfmn/73bv2OGDQF+omO3s4jJPPkgNcHLeoZWkDQ7hEowxdh2dr5E+4xf/w8cpS4j+ALOm\nYXPsmcwV2+d2cF1ASLji6gGf+ez93Pqxb2Jt/TjHLn8Jx68/yBNfuTRonzmes2BZXzvKV+5+iCOX\nH+GBB+4mkxLnkv9GFC1SKrRWyZPDOZxzyShUmOTX0XX7JMPYdckp9xnB0jYdRmvqpkEpCc6xpT3Z\nZUv0H79AT+Scvupajv/Q98Cv/Dd2lMOKSHBQBMtEe3TQBCWIQmCEhugIQmDJGPcN2WCJVhVopjSi\npt8mpXcnLWXruPJJz7g3J6dgmh/kJT/844x1R/zgFxGnngAME2VoRUBXXcooLWolV1x5lNe+/pUc\nOnyU5bVVOuvIyzJ52xCYzyYUWc5/+H9+g7f9X7/AR/74w+BTy3BZZtB2+BAWXLWIEKkrcb/DfGEC\nlWpVix4YF8lkxjfd/DIOH7iCXDa0taA53xHzGd61uMonkEQbcTNLZiUezQMXHufqF2asLWsGKzn9\npZy6c6yu9rE24oUkCs1cJP5xr7dE1U4Q6hnW3ja9R0RA9ErausY3HjcOFIWgMElN0TUWqQ3zqkbo\nEm8lPRPROhBbSxfSMbmxlsmFSOwyfGzpDQPTSUXZGzF5/Axnz93B4SNrPP7AeZ5tPGfBcv/9D3LZ\nZYfY2TlHZiB0dqHTyZBCY3S2sEtzqUtuYRudXKWS7qltnwZKd12HMZeqRqVImE+tNVmeU1VT+mh2\nDg9R44b1KnDHj/8MZj1juVDQOISMOGnZLZc4ePNLefjTf0HmWkTs4ZMiD+0dB/2EL/zs/0ZjFEfb\ndP4PWjMVJV7mjLQjdjN2+xbdWRopMT5y5/s+yPpAo+KMT/78L3BQ1vRe8EIOLK/Q3vbn7IiQahal\n4B/9wPdw4OBBhEj96LmBaVtRFCUxBnqjZaLz+NDyoz/xz3noK/fz5EOPYazCWcvQ5AQl2K6m+xd5\nIcR+4TLVXtICoxDkRYFSire85bsJXYVGElxBUWxghObs6Udpmm1KUWAbu2jtzvGdZB4DXR7IDiiW\nlmB1LUfnisIbrA1QKKyPNDbQOIt3Hc5PIDbY7tJJat2CKyA15MvkCupqewEql0Tn8Ca1mPk24kQf\nomF99TK6+nHKfkApyZLWqV1bCZb7FhkVQpUoA4hlvCvxUTFvKh55as7LXvF3uOXWu/7SOfucBcu1\n117JqVOPM1rpQ7BPZ2sWvRlt2yCESDC2vR5sKWmaNlV2MQuebiLA7/m0XDw6awmLjkLvHa23lI2i\n6RVU1x3l5N2PstSeZunxnImK6IWQzhnJFa96Heuvu4nb7riTvk1ixzIoOqDrL2EcHJhLdk2L1wYV\nFFZobvwXP4o6fphP/7uf47L5HC0iUWY0mWPoW9o7Pso5UTNmjiFjrjyxrzm/eQYZBCJTCClBBkYr\nq7RElFwgeqKgNH26xlKWJV09heCZzicsFX1+9Z2/xpte960415EDXdcSFCyVBfO6wSejToh7bl9P\n16WEELz4RS/mhS98Id6DyDNEbNBGEHyCpx840mfrwlm2zpynrUSyNfCezitarVhd36BXnKEoapRs\n0eRkRUEsA3Vn0TEVTvMQ6byjs+1iJ7xUZhIBIR0uRpxSaLXEQOd07ZzJeJuitKyMepR9jfOK6DRL\nSxssLx3lQn0KZSzGOFbLIcsqo/OS6awhErDR0FmNswYvMoQXuOBoq8C
j9WPPOmefs2DZvrAN0XP6\n1FMsDYYoo8n7vQQqiGHfsTaEBB/Y68XXRtE0DXHhhZhniasr5dcAH/hACKmBqLYdpcqookU1lt1M\nMbj6ELuPn8W1DaWUqLiYTA4ev/UzmLWcoYdoM7QCLzzLr7iZq974bdz1rvewdd/9qOARUUK0NMKw\ncug4lR0yD1DrDmKJloIyZNAGfDEh+EBfZVgRqK2hvuM2diUoWl50083c/vnbUViyIqNjRowCGUBL\nQ5QlRSZp6xYhDHmZoBOzyQ6f/Mifc+TEcR75ykPEKHHCE12kUBnlcJkL05qIx7u4uLcsgkUKvu8H\nfhApDMjUteqDR2OgnWO0xPYkKMnho1exceAY7YJBVo2n+CIn6/VQypOXY2JWY2OLCQFhI6LI0DIl\nXIRwaKlpm9TcF5zEi0tNhKIVeNkg9BATQyp4lob1VclllxUMlxuij9BJ8t6IY9e9mNNnx2yebMgL\nie8aulowljlKecp+yZoqiQKC6TGpArsXGrouMBiuceHCeRSC1j47N+w5C5YYLHmWsbJyFOE9jsju\n1vY+2WN9fR2lFEVeUFUVSqlFwETyvNx/bG9Yay+Bz0Ein0gpqKpER9dK0MsLog8opdkdZhQ3HmP2\nyGnM2QlGa1RrKbxiw+1y8t1/hHYtjdTUymFazyOf/yI3vOabyVdXsF3AFxErA7lXDKPg/T/9kwQd\n2QhzoshwImc7BEJ/wNA3iGgxQlL4SCcEXWhQUaFthzKw8+g5FAqnUi1B+wwjVSIn1g33fekTfP7z\nX2Rrc4vz5zeJPlIYQzWf0xmDjYGjV13LIw8+ntLJUmKdxYbA0qDHvJrhYwdi4bYWIwrJNSeuY3tr\nStskcHYVxujMYH0PoUD6Fk3iJ2elRGXJYGi4tAK6Rys80dfEUBK9ITiPwyOMQziJlmmqiRCQPqCC\nTBwBXyfLw4uGEB6tcnyEzDikiBw81HHoqEWomtbVmN6A2bxjtrnDePoZBksZl195mEwuc/5chWtK\nzm97DmzkCNHSy/soZZDaIEtDk4Gzkel4jJSCxkWa6d/SYFEsiIStI5PJTHN1eSWxsELAWsuFCxeo\n5u0lwWMXGNNuccFvmgbnHHmeY+2lF0XnPM61GGOS5EMIrLVkUiXH29mcqYB8vc/w0Do74xmzR08z\nMBmnRcvBKjIrIiYKZg4OSs3VXvDFn31b0hIVHicEvaDx0qJixyE/RQVJVJ5GRZT0vOLHfw653OfO\nt/8S4swDSJeTR4gaGgQqeMBSuwx15glGhWDaCv7zv38HUXXs7uzSzwpCCGxubTGdtDSVJfrIaHWF\naeOo60C1M6ZtO86fPcWBtTVOnUkwcWNMUho0c4ZlwbCXM57VuBD3W3g//7nbeNU3vBoRJds7E+Tc\nIxAgc1ywoDQIhyQdd4UCrSRap0VMq0h0OSFs4O05vJY4HEIojNqT3yh0CGjvEDbQtTUhdOnYefGQ\nAaVTEsPQcdnhHa48IelcS+dzCGs89tgYv93RNYLIBW540RHGuyepK8+RYy/gngfuwxjNzqQhWkMc\ntvQGQKcItcdIRXQtWJA6HeeV7D3rnH3OgkXLuHC4VbQxmRdF69IvXwi0UoyWluj1UgA0dZV4YURG\noxFFkSX6PNAFi20c2lxq7e1lRGcZ3nqG5YCt8RaZNnjhCdEnYIH3iCJjF8Eju1sUaznloQ2iEJwc\nV3SbuxyYe5Ca1hh2bJXaY+PCFru3jDh0BdOtCwzmEwQBJx0yRkSQONnRbNesZIHJdJuR9wTl0T5B\n9jopcBFsNFgBuevo2x5tB6efPI8ZKoKD2XiclMhO0e+NqKotatcwPfU4vX7Bt7z21bzyVa9m+9wO\nH/nIrdx7z/3MahISl6R7MkrjO0uWGQ6sjJhVDfO6BqH5wIf/lE/9xWf4R29+M8eOHWH9wAm2N7cY\nj6dkUtH4GVEoVF7s684ApNLIOEOLgBWOyCHa7jSRc+ihJHSGIAJEhQst3kViULRNR6gjKHAxJBDh\nYkRliUi0gkEZuOYqT17WxLZJkEMEV11zjHtufYTJ2BG1YGfcMGw0KhOYYclV17+Me+/6HL6T+MbS\ndIYlB0YEbJWzu9Mwq1P7NTLSK3u88OY38NFkXgm/AAAgAElEQVQP3PKXz9n/0UHw1x1CQK9XJrfZ\nmLqfpZR450EK2iZ5kiiVLvzLKyOkTIacbduyuzshxkhZDsjLbHEMu/SC733Yl69P5/PU1+I9eZZB\nEEgkzjmqqmFaNfT7PdZXRtjoEFowOL7B+uEDzO55lG635sSRI9QnzyFcIGYa7S2tc7z0rd9PowQf\n+6n/g9VxTWciMshEyg9wz3/5NbzqWOkalM+ZqkAXPbtCclY65sHTy3KyCNFIOhFokZza3qWcWbTU\nlGXJkUOX0QXFdDJlaeQp7YjgA+Pdc/zZhz7K7//hB2nqgPOgZI7QCpMXCS3kFyA+wHYdzloG/T6Z\nyZg3FmcddV3zrv/6m4wGA77l9W/gxhfeyPpan7qqubAdaa3FRUX0DhbuXsl7MgPvkUYiYg/RHiMI\nS1tNMRo63y48RMEGg4+RLC+IUbA7nSeE0kXBopSk7PUZDg0njkV6g3NIodjor1NZgWwDs3aba1+2\nQjPXCJ1TDgST0w02dpx57CyDpcMUvWV829J0FqkkIXbkAqpZw3g3Musi6IxipFheuZFKPKPF4xnj\nOQuWLEu/nbIs96XldV3jvKexHb1ej/l8jlYLR2LnaV06kvWLkkHZo+06kNDUDW3bMpte2sudS0Ug\n0HUdUis6Z9FCMJnNEkxagJSKyawCKVjfWKVQJnmGAE0XUVlGcf1VNHc9ygNPPcUhmWGUIkSHkILM\nCj74f/8qL3rFyzBNg5UO5ZPeLEgIPjIMW9Qu0klJqyVzItu55JyMnPSWLhMY32C8IDgPSmP6hsHa\nOkc2hhQm58H77uOWez7OvNOLNLlGSJM0YFKgY6SJCiWSetr5lKvQWY7Uhmb+9CVaCAExUM+m5GXJ\naFBQ1xZJIArN7rzlj/74fXzozz/M61/7zbz4RS/mxInjbO3usr0zI1oWZB6/aDg1KC2QMiP6Di2P\n4zpBxyZN3EQUJYgG4XO8ULTW01nFfN7hXImPlx7DlpeXkTJxzXrlDkVR08tXUSan6KAw0M8d3bCl\nnmsm0watlxgdvgzfrbJ1bpezJ8+yvHKAJx59mEGWAxrnBU1wdJUjxB7IiCwGkK+wduQmmtB/1jn7\nnAWLlDJ1z1UtKssWl3GZLJ2Voq5r8jynl5lFKrlFi5SRwSdqu4gxNVplhjIz2Gf0cteTGUWvl2os\nbUtW5MxnM7RQdHVFnhdMtndYWhqRFzrZl8aICAYpPaGxzH1kMBwQVpdotsaMW4uWGUYIHFDYyNr5\nc5z88B+zZDukl3Q9ResdndJoCy0CKyIzGZmJQJCas6LjCdcSVI72giCgUwInLFIahIU77/gStwmP\nVgrfWIIL6Mygsz5t8MgYFh
BKgYsRjSSGZPgToqcNC56yUCyNVuiqOd3+vS7B5WzTIrPIoFBYF6lt\noHaOUpS0uxXv/+AtvP/9H+IN3/Z6XvKyl3P5sRVmdc3Wzg51XSfWmlCI4EBZBAahBDpcT9OsMJ8V\nZK5GZpsImYqEXRCpLTpqPNDM1+EiDWOv7CNVMl3q9Qy9XkahwRSefplT1hnLITJ3JTbXFAjOb3bY\n3ozaO/zQQOYIUYEssF5Bk4rZifxZYEyPPPN0eo0TV72Wshjgnnl3esZ4zoKlahsEirwcYNsaLZNB\nZowRLWHYL7Fdh/eWEAJFkVHXNVImV13nHc62GJN2qRgFUlxalCyHJfNqRt3M6WzH0miIWUhth0tL\nzHbHaCE5uL7BdLqNUoK6nmN0QRcizgdsW1EFWLpyg9C11GHO+dDRGy5x9Mgxzt79FVZiRFiQIuJ0\nYBYzXv+2X2bWOf7kp38CbRusi1QiMtWaR9qGcR6xWiKix8k94ERyFQ7B4YHdabKv1CqQK50kKSLt\nPkomp2cUONcldbVMzsJhgTXVimT4JCVKCJbzFaaTKU3XJPffVJ/Edx2dSiwtrSKFU9RdQxCS2XyO\nlIr3ffBDvOe97+M1r3kN3/ya13D15Yep5nO2treZtQ6vFUIaoooEmSDho/6AQJ/J9FG0j2SmpovJ\nhrtuBVXXZzobUeTLwNPo3f5ggJSetnIYE8j1FrmUaCJC14h+sgnMrWReB2ZlZLAC484jyZHaE6qa\n3PToFddQzWYEBXlskjVeKGkYMhiscuTQlehsQAjpPvZs4zkLFgiphuI8hIgqDDKCW7S/xhiJStM0\n1X5K2FpLjG6fWqmVRghP1yXruGcGS9e0aK1YXVlNF++QvrauG7YvbDEoSg4eOMDuzg5CekxeYLLU\nLOVCxMeIyTKEEGxOJ/SvPsCgDUwfPcXubJeTXxmzKgIiM6goUU7QikhnCp588DHuefhBmtZhVGBW\n5jwVLGeqmqY0NARilAmWt6ishwUEHJLL12Q2ZXV5gJYKSSqsLjKwRB8RCxewxP5Nv0+lEis4iSXF\nvoe8dYG8ZyiGPUIFXd0gL8Ioee+p65SON8ZgtGE8b2jqBpUZvJdkWcYnPvlxbv3oR3jxjS/ida97\nHZcfOYIXipNnTlO1LRIJIkOqiA8tg9FljEartN0O4/FpNs+fZnuyS5AFK2tX0R/2ybL8EkrDyspy\nUpfnmugmyb1aO4QIBKuJ0SJNhs4VzkvKAqrWYoKnCjVYRcDSeE82uIJ8eYAIks53hBAxIadUGxg3\np4w53YUZxeUbxPi3tJ9F6wxnU5pYGU1b1QsZC7iFuG+PtBhCoG3b/Sp/23YURSJxRGA+q8iykqa5\ntBLsncPajmI0RCi58GVMK/fq2iqDvMRaS/CBECPNvEJGjQsRgqdtW8oyfU2Wl8yEIxaCK296MQ9+\n6gvkQlNFjw2O3EpykppXTy/wuf/8NjolGCjHAz5y3s3ZUpGqr5P7VEivPoiYGAIX0eHhaVp807QE\npwhSorQgOIfRGinVQkOWjq8pQNIiopRavPZiXxrkpV9o5BR5v0cvL9jdTsJBIQRGiKSfI4K1BB1Z\nGpRopZk2FY1vSQEZKDLDvffdzZfv/gIHDx7in/7wW7lsfY2sKDi/dYGd2RgpM7oYkUoSXU5e9FnL\nDzFceX5atJzDxgEej/cWLrpu9ns9DJFGS+paIiiI0RPDAvvrOoJzxKgJUaJVhpIeGR0qKqyVaCSt\n93Q2A1YwWYbMMvLcsGZyPvPxD7C7+TC5chw4fDmvPvqjqL+tnpIhJkFf8CGBzxZSchcdxHSnaZrm\not6LNJGEUCDjIvsF1nb0ByOMLvatKvbGoNen6xRlUWB9h5KaST1DKU1mMpquRS2OdN4F9u6ZYqFk\n1lrvF0l7ZomVkFOHji8129xbKHptQHUtAxlYNZ6+NAgEIY7wtJxp5mxnEGKBNRKPRcWACBLlwCuw\nkr+Ug5jI+REJ5IXEWY/JdYKvioX2bdHkJaUiXiSMfLqIK7HWpqAxGh9ShlApycaBA4wnY5qmIVdp\ndyImywpsJOpIr8wwxZDZfEbbNYsFTCGEIssMF7Yu8Iv/6T+yun6AN77pjRw8dIi1tTVmdcP2dErd\ndkjZ4oVE5QP6eera7IIn6B7eB7q2uSRYyjyjq6bkqoedjwh+B8ec6CJSdgThkVoQfMA7BU6Ab1Nb\nuMpQQiGCgBqEjXg6ApaslzMq+tx/+ydx2ycZqhal+6wfeD4ipKP8s43nLFhsSKu50ILOdjgncEIh\nUMRosU2L1hql9kDhyVDUuUgUHutbQCFNn/FsxnAJdHYpBV1KjTKauW2JOGYLJtnRw0cgpABtvUvC\nQrFnfWARSuGcT1k40ufaWQVlQetDEh02LvWNiEiDYleJZKYjwNKg0MiiSAtBDIgAIhqChCg8waRd\nUcZkifdVmIIQFxbXEaUDNnqyLKIWNgxx0RiXK4nJc7y1dNEioifLsiT/iQLrfLpLiNRvExEgFFYI\nbIxkSyNillPPZmghKKRCBU+Iks45gnPozLDW71NpRd11SCVweGIQyBiZ+Yrq/BP82jt/haXhEt/1\nprdwxRXHufLgIZq6ZavapfMSJXOq2BAUibbiBRIw6tL6WK41WV/hnSPqQ2xtbTMatBjjUTokTZoL\nEAzBBoJr0Ti0zBFiTiZLnMsJdgmvesmWQisyDI/f/SXOnrsTKSNKpNT/9S/9eoIu0PLZuWHPWbA0\nzaLRJnqMyTG5QUhF09SEBa+3bi29PEeJlC6VRGxIBUyhzKIprCPLMpq6/SoI+nQ+BiWIDqq2Yjqe\nc/z4cbqmJdOablH9B9DG7B/99lpy94a1liwrsN5RFAW7kynBSIILxChoY8BKCcqkFT8mly4f9gSQ\ni6yUTDtCCAvPSPiq3RAWiuDF58MCWC2lQQiPi4lYIoVM3vTe4ZoGqZKFXCQVWiHZSuxJgkJI3xvj\ngrYfIgsBMmiNV4kr4BPNAReSE1vwLlFxIhRZRlZkhBCTO/SiMBlaT1SpwDcbT3jnO3+DwWCJ73zj\nm7nmmms5vHGA1gUu7IxZLpaYNg1d5/Ehoo3+KgNWIQ3eFZjcIQuB8M9jXt1Llp3CZGoBGpS0tqVx\ngtYrgkwsNYkmBkHVgotJilPokv5wnbNP3MtD99/CilDYYLE6Z/3ICWQ+IAaF+CvI4M9hUVKRZZKv\nf+U3cOOLX0ie9fj1X3sHyIxZ1aZJvABjC5Hk2J11SJURggMlyPMCt1DgNk2DdZcaeXo8Uihmdc1s\n3rC+sUFd13RNi9c6+TAujHFiCBRFkVQCMWKMTpPCpRaBrusgKuZ1xdbuGLdQMzvCopk9ketjTIjR\n/Z0iLly4QyTKNPn1RUck4GveVfY+ThKd1MvTKzKazqNJzl6CiBICHxMD2QePXiRHQggonT3dhw+4\nxeuMLmIXRUi78KE3RpFJiZES7xKLLQaPiiAjBOHT40phpMLkKd1vbYuUOhWA8YQIAc/
u7ha/9V/f\nhVKa73nTm7jhxhdxaHUFH9OkO1dtI4XGB5lUzReNgEHnQ4R0KO0oGFFPK6oGyr4nLzxdmOMD2A68\nVXTW45yicSWTbo2g19D6ICbrU+YDdk89xEN33cpSptAuEkWkMz1e9A2vofYBJec8cveDzzpnnzMz\nIyGgszU33fRinv+Cq7n+hhN855u/De8agguJuOJJK3frUh++D/jo8USs9UwmU2xrEULSWXtRDSEN\nk+fUbcdkOmNtdX1h5BpRWqcs157l3MKn0VpL27b71nOQJqzzDuc9znlAUM2r/aIcJC8UAgTnFyA3\nkR5drPSI9MfZS/nsBcTXMr+9OFD2jmfOp+a3uukYLg2wziKUJAiBExGZ6X2H47TIpCMrJM1TCDHZ\nxvlA21jmVQ0iMhj06Pf7FEVJMRggM4MTApkbvILA4mIUA0qKxYce27V41yFEJM9NcuNSCte2eGuJ\n1hGjp7MNTVvxvvf/IT/1k/+WT3/y4/imYnVYcuWxKxgNc5RIBhgXD60zpDBIUYIoiTqjHF3F2sbX\nM5tucOpsoGoz6lbROkPjMjpfUMclxtUSjT+CNCcYDPuMRn06u8M9n/8QQ9kho8aKiFeS4eoRVg+c\nQKoeF86e5CtfvPVZ5+xzpzqWnkwbbvnwh7n2mrfig+fmm2/i/i/ezV13P0bdRpA+EfSFoG1bsizD\ntvP9k0uWZRiZs7m9g19AUS4eFzbnVLZjZbhGoQ1N1eC9J89zqrZD63SRV0qhFl2Zo9EIay1uL+Xq\nXarj+EDXWVAKFy5tBwjRJ4RqjCipiC4F4V7YOJGcrKSQ+8ewvUzfVwWMfNpgVQAhWhCgTYZ1FrzH\nSIHwDqkURpsE5Igx9fXkhizP8d7jnaRuWqbTKcPhgJXVAXKYUdcdzjfEaFNdxAX0bIosc7JMo4Qk\nK3OE97TzBhc9uQct91gDQIx47wgiCSqlzlB7wRoCkZQoiBEmVSpW3nLLn/He976PV33jq3jdt3w7\n63mflb5ia/vSdl6pCjwtUigEji645H6cjVg9OMSGHebVFuPxU+zOO3YbyW7V4eURynyDPFtCqREu\n5jRVwxc+9iesaIm1NVZ1KLXEN77+O+kfOk7tMqrZDnd++qP06r+lnZJ7KM3HHz3JbHfO8mUbZEpz\n000v54tffhjvPUoJlpaWkuxF61TlN4v0JlBXNZ1w6UhgO/JnXPBF5lnp9ymy7CJQg1w8twIWrbTB\nAwpjSmazOUVe4n26z+yZwOYmS5N7kZ0LMX6VISiL/++bC3Hp3edrIYn2+QIXP3bxexAJ2NE0DRvr\nq/vHQmChnVP7X5dlGc45Tp8+TZ7nFEWP4VKfojSXtGbHKNHa4D30e31ErFknUnnPZFaTFTn9rESb\nHCEMbTXHhZRRw3kMe5edSBAQnE3gOxUx2oAoabuW2nZEUherlJHpdEKeF9zx2U9x+2dv47prvo43\nfscbOLS0cunkCBqpDHVXkwuN1hkhBhoX6VyPsr/KwFyJC5czWCtZ7gTTeUcXM7wXBHIQOacvTPjS\nHR+mjDXW7SLzHrF3hH/4/f8TtetRNYHoGz73qfezmltOnd7m2cZzd2eRyaNkMm753f/2e/zjt/4Q\nwo5pmym9Xo71DbNqQtuk7E6ep0xF6+r9S7lRJvkFNi1Zr8DZS+kc83bKaHlAPZ6jVE5mDCH6xR9P\n4v1e/4Kg6yxSRLQ0yQFqce4PIewTUrSUtCGQfFBUEidyabCEkM7ue81oUkoWlUZgEVyLaBCRr8En\nvnjEfbZXU1dIuc54Z8zy8vKCk5bS63s1KCEEKMloZSV9HAJtW+3vguk96pQiDQKBoq5btMqY1RMO\nX3uCA4Xh9JlTeB8xUlKUPfr9Hp2bMz63RY5CKIGQkUxKfFwkIboWqdKRKkpJkacaT9W1ZCojehBK\n4l1NppO/5wP3f5m3PfQAhw9vwM1Pv+vDGyNOX9ii1yvTfUxlxODpgiPmksqnv4daPgLSUFoohoKq\nbvFREaLGBs3tf/QHiOokZZ5hsw1e/sq/x4nnvQoveqA68szy6T/7U9z5x9iptijM/4DUsRBiGXgn\ncANpWf9+4CH+Bs5fIsYkljOCk0+d5C9u/TiP3H8/99z/MBd2qnS3EJIYHLPpjCzL9zliZV6AFNTz\niuCTh2FVT76KKHhg+QCz8Zxe2SOGiPPtYmVd9J0LuaijpEmJFMQI1js66xGo/dXb+6Rrss7jg9x/\njq+5swhHJBVYESFlv/jqXeSi280lr3tvdwJBXPysZFuhKLMBs8mMsixxIZDnOWXZS8EpFlm9xc6n\npdrnFBiTpcBXLBCr6fglZFIOyINLPLVznkika1uMMEijCAGU0HiRM1hdY7q5RV+bVJKBRe+JxHiJ\nVEmsSghJ/SAkpTJ4EWl92tWyIicEl2iRSuKc5ezpS48/pfI87/jlzOuO87tTHFAFgQyKTGUEWiyC\n4DOE1EgkymQoafEIfND4AP/zW/85d972Sba2xtz0mm9BiBIR+8ggkbrki3d+gp0z9zIMMFeJePNs\n46+7s/wS8KEY45uFEBroAz/J38D5q2tbnE3Qg82tyG/95rsJIRIQSKMRIq3s/V4Pa+2isUsiF/cX\nKSVlWVLXNU3bIoyhe0ZbaPARowzW2sUxLuxXtJMMJE0W7zxC6cVOsrjDyFSQzLL0b8qOpZS2DyHV\nX77GUQsWgOnAvl/9xfzhi0e6/LP/NelSzz4IL316wSYTge2dMcZZDhxYB5Jjsc7SjpsKuiFB6Lzf\nf23GZPtylj3py17DV5HlKCJRJlSUkTlnz5whzwus90hrUSZNaES6eHspOD+bMRwOKQRIt1g0RGrg\nkyoVRwWR6BeejdKQ9TIa6+iaOv3+CQSfkiu2u1QA+zM/81N8z1vewvOf90KuPLjBrGmYVA3TpiOi\n8CER8BUiscAW71XJPiDJMkPjIsEG/s6rvh0fWhw51idoho2Bh+66g7u/9EkGqmXeOJTJvqrT9pnj\nr0PRHwGvijF+H2mCOGAshPgO4NWLL/st4OOkgNl3/gIeF0LsOX/ddvHz7h1RrHU43aO1C5aCDEhn\niSEFi9YCqQTFwmimtc3+ZN/rlqTuqDvPqacu9lCCqq6wXbfgYmVkZUGms5TmFBEl09FFqEiI6cLc\ntHUCbovUi9513X56WYinJ6LYP8aJ/SPQ3gRPqVizHwD7E/8ZQbV35wkXBd0+bjU9kJqTSDtNUWSs\n9Vbo9UpcqBGiwPmwf7n31qb3tHh+v8gO7mf8YlI+GGPQcrFjLrpO9WLnXt/YoK5qWt/QNhEtJJnW\n6BhTrSmAk5rNyZw8BtZ7A2Qh9lPhMQQMi1qOXOzcdEiVUxhNYRRNZyFEJm0FQaLVpRoG6zp+53d+\nG4HgO7/jTbz0ZS/nivUlmqg4P50xmXZkOgVJqtorpNAEUrofJ1DSIHolzlpQObGxZFmPzrY89dBd\n3PaRd1PoDh+7ROmn+JsHC3AC2BRCvAu4Eb
...(base64-encoded PNG image data omitted)...",
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# or show the bounding box of the referred object\n",
"refer.showRef(ref, seg_box='box')\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"sent_id[64727]: woman in front\n",
"sent_id[64728]: lady smiling\n",
"sent_id[64729]: woman\n"
]
}
],
"source": [
"# let's look at the details of each ref\n",
"for sent in ref['sentences']:\n",
" print 'sent_id[%s]: %s' % (sent['sent_id'], sent['sent'])"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 2",
"language": "python",
"name": "python2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
================================================
FILE: refer/refer.py
================================================
"""
This interface provides access to four datasets:
1) refclef
2) refcoco
3) refcoco+
4) refcocog
split by unc and google
The following API functions are defined:
REFER - REFER api class
getRefIds - get ref ids that satisfy given filter conditions.
getAnnIds - get ann ids that satisfy given filter conditions.
getImgIds - get image ids that satisfy given filter conditions.
getCatIds - get category ids that satisfy given filter conditions.
loadRefs - load refs with the specified ref ids.
loadAnns - load anns with the specified ann ids.
loadImgs - load images with the specified image ids.
loadCats - load category names with the specified category ids.
getRefBox - get ref's bounding box [x, y, w, h] given the ref_id
showRef - show image, segmentation or box of the referred object with the ref
getMask - get mask and area of the referred object given ref
showMask - show mask of the referred object given ref
"""
import sys
import os.path as osp
import json
import pickle
import time
import itertools
import skimage.io as io
import matplotlib.pyplot as plt
from matplotlib.collections import PatchCollection
from matplotlib.patches import Polygon, Rectangle
from pprint import pprint
import numpy as np
from pycocotools import mask
class REFER:
def __init__(self, data_root, dataset='refcoco', splitBy='unc'):
# provide data_root folder which contains refclef, refcoco, refcoco+ and refcocog
# also provide dataset name and splitBy information
# e.g., dataset = 'refcoco', splitBy = 'unc'
print('loading dataset %s into memory...' % dataset)
if dataset == 'refcocog':
print('Split by {}!'.format(splitBy))
self.ROOT_DIR = osp.abspath(osp.dirname(__file__))
self.DATA_DIR = osp.join(data_root, dataset)
if dataset in ['refcoco', 'refcoco+', 'refcocog']:
self.IMAGE_DIR = osp.join(data_root, 'images/mscoco/images/train2014')
elif dataset == 'refclef':
self.IMAGE_DIR = osp.join(data_root, 'images/saiapr_tc-12')
else:
print('No refer dataset is called [%s]' % dataset)
sys.exit()
# load refs from data/dataset/refs(dataset).json
tic = time.time()
ref_file = osp.join(self.DATA_DIR, 'refs(' + splitBy + ').p')
self.data = {}
self.data['dataset'] = dataset
self.data['refs'] = pickle.load(open(ref_file, 'rb'))
# load annotations from data/dataset/instances.json
instances_file = osp.join(self.DATA_DIR, 'instances.json')
instances = json.load(open(instances_file, 'r'))
self.data['images'] = instances['images']
self.data['annotations'] = instances['annotations']
self.data['categories'] = instances['categories']
# create index
self.createIndex()
print('DONE (t=%.2fs)' % (time.time() - tic))
def createIndex(self):
# create sets of mapping
# 1) Refs: {ref_id: ref}
# 2) Anns: {ann_id: ann}
# 3) Imgs: {image_id: image}
# 4) Cats: {category_id: category_name}
# 5) Sents: {sent_id: sent}
# 6) imgToRefs: {image_id: refs}
# 7) imgToAnns: {image_id: anns}
# 8) refToAnn: {ref_id: ann}
# 9) annToRef: {ann_id: ref}
# 10) catToRefs: {category_id: refs}
# 11) sentToRef: {sent_id: ref}
# 12) sentToTokens: {sent_id: tokens}
print('creating index...')
# fetch info from instances
Anns, Imgs, Cats, imgToAnns = {}, {}, {}, {}
for ann in self.data['annotations']:
Anns[ann['id']] = ann
imgToAnns[ann['image_id']] = imgToAnns.get(ann['image_id'], []) + [ann]
for img in self.data['images']:
Imgs[img['id']] = img
for cat in self.data['categories']:
Cats[cat['id']] = cat['name']
# fetch info from refs
Refs, imgToRefs, refToAnn, annToRef, catToRefs = {}, {}, {}, {}, {}
Sents, sentToRef, sentToTokens = {}, {}, {}
for ref in self.data['refs']:
# ids
ref_id = ref['ref_id']
ann_id = ref['ann_id']
category_id = ref['category_id']
image_id = ref['image_id']
# add mapping related to ref
Refs[ref_id] = ref
imgToRefs[image_id] = imgToRefs.get(image_id, []) + [ref]
catToRefs[category_id] = catToRefs.get(category_id, []) + [ref]
refToAnn[ref_id] = Anns[ann_id]
annToRef[ann_id] = ref
# add mapping of sent
for sent in ref['sentences']:
Sents[sent['sent_id']] = sent
sentToRef[sent['sent_id']] = ref
sentToTokens[sent['sent_id']] = sent['tokens']
# create class members
self.Refs = Refs
self.Anns = Anns
self.Imgs = Imgs
self.Cats = Cats
self.Sents = Sents
self.imgToRefs = imgToRefs
self.imgToAnns = imgToAnns
self.refToAnn = refToAnn
self.annToRef = annToRef
self.catToRefs = catToRefs
self.sentToRef = sentToRef
self.sentToTokens = sentToTokens
print('index created.')
def getRefIds(self, image_ids=[], cat_ids=[], ref_ids=[], split=''):
image_ids = image_ids if type(image_ids) == list else [image_ids]
cat_ids = cat_ids if type(cat_ids) == list else [cat_ids]
ref_ids = ref_ids if type(ref_ids) == list else [ref_ids]
if len(image_ids) == len(cat_ids) == len(ref_ids) == len(split) == 0:
refs = self.data['refs']
else:
if not len(image_ids) == 0:
refs = [self.imgToRefs[image_id] for image_id in image_ids]
else:
refs = self.data['refs']
if not len(cat_ids) == 0:
refs = [ref for ref in refs if ref['category_id'] in cat_ids]
if not len(ref_ids) == 0:
refs = [ref for ref in refs if ref['ref_id'] in ref_ids]
if not len(split) == 0:
if split in ['testA', 'testB', 'testC']:
refs = [ref for ref in refs if split[-1] in ref['split']] # we also consider testAB, testBC, ...
elif split in ['testAB', 'testBC', 'testAC']:
refs = [ref for ref in refs if ref['split'] == split] # rarely used I guess...
elif split == 'test':
refs = [ref for ref in refs if 'test' in ref['split']]
elif split == 'train' or split == 'val':
refs = [ref for ref in refs if ref['split'] == split]
else:
print('No such split [%s]' % split)
sys.exit()
ref_ids = [ref['ref_id'] for ref in refs]
return ref_ids
def getAnnIds(self, image_ids=[], cat_ids=[], ref_ids=[]):
image_ids = image_ids if type(image_ids) == list else [image_ids]
cat_ids = cat_ids if type(cat_ids) == list else [cat_ids]
ref_ids = ref_ids if type(ref_ids) == list else [ref_ids]
if len(image_ids) == len(cat_ids) == len(ref_ids) == 0:
ann_ids = [ann['id'] for ann in self.data['annotations']]
else:
if not len(image_ids) == 0:
lists = [self.imgToAnns[image_id] for image_id in image_ids if
image_id in self.imgToAnns] # list of [anns]
anns = list(itertools.chain.from_iterable(lists))
else:
anns = self.data['annotations']
if not len(cat_ids) == 0:
anns = [ann for ann in anns if ann['category_id'] in cat_ids]
ann_ids = [ann['id'] for ann in anns]
if not len(ref_ids) == 0:
ann_ids = list(set(ann_ids).intersection(set([self.Refs[ref_id]['ann_id'] for ref_id in ref_ids])))  # restrict to anns belonging to the given refs
return ann_ids
def getImgIds(self, ref_ids=[]):
ref_ids = ref_ids if type(ref_ids) == list else [ref_ids]
if not len(ref_ids) == 0:
image_ids = list(set([self.Refs[ref_id]['image_id'] for ref_id in ref_ids]))
else:
image_ids = self.Imgs.keys()
return image_ids
def getCatIds(self):
return self.Cats.keys()
def loadRefs(self, ref_ids=[]):
if type(ref_ids) == list:
return [self.Refs[ref_id] for ref_id in ref_ids]
elif type(ref_ids) == int:
return [self.Refs[ref_ids]]
def loadAnns(self, ann_ids=[]):
if type(ann_ids) == list:
return [self.Anns[ann_id] for ann_id in ann_ids]
elif type(ann_ids) == int or type(ann_ids) == str:
return [self.Anns[ann_ids]]
def loadImgs(self, image_ids=[]):
if type(image_ids) == list:
return [self.Imgs[image_id] for image_id in image_ids]
elif type(image_ids) == int:
return [self.Imgs[image_ids]]
def loadCats(self, cat_ids=[]):
if type(cat_ids) == list:
return [self.Cats[cat_id] for cat_id in cat_ids]
elif type(cat_ids) == int:
return [self.Cats[cat_ids]]
def getRefBox(self, ref_id):
ref = self.Refs[ref_id]
ann = self.refToAnn[ref_id]
return ann['bbox'] # [x, y, w, h]
def showRef(self, ref, seg_box='seg'):
ax = plt.gca()
# show image
image = self.Imgs[ref['image_id']]
I = io.imread(osp.join(self.IMAGE_DIR, image['file_name']))
ax.imshow(I)
# show refer expression
for sid, sent in enumerate(ref['sentences']):
print('%s. %s' % (sid + 1, sent['sent']))
# show segmentations
if seg_box == 'seg':
ann_id = ref['ann_id']
ann = self.Anns[ann_id]
polygons = []
color = []
c = 'none'
if type(ann['segmentation'][0]) == list:
# polygon used for refcoco*
for seg in ann['segmentation']:
poly = np.array(seg).reshape((len(seg) // 2, 2))
polygons.append(Polygon(poly, True, alpha=0.4))
color.append(c)
p = PatchCollection(polygons, facecolors=color, edgecolors=(1, 1, 0, 0), linewidths=3, alpha=1)
ax.add_collection(p) # thick yellow polygon
p = PatchCollection(polygons, facecolors=color, edgecolors=(1, 0, 0, 0), linewidths=1, alpha=1)
ax.add_collection(p) # thin red polygon
else:
# mask used for refclef
rle = ann['segmentation']
m = mask.decode(rle)
img = np.ones((m.shape[0], m.shape[1], 3))
color_mask = np.array([2.0, 166.0, 101.0]) / 255
for i in range(3):
img[:, :, i] = color_mask[i]
ax.imshow(np.dstack((img, m * 0.5)))
# show bounding-box
elif seg_box == 'box':
ann_id = ref['ann_id']
ann = self.Anns[ann_id]
bbox = self.getRefBox(ref['ref_id'])
box_plot = Rectangle((bbox[0], bbox[1]), bbox[2], bbox[3], fill=False, edgecolor='green', linewidth=3)
ax.add_patch(box_plot)
def getMask(self, ref):
# return mask, area and mask-center
ann = self.refToAnn[ref['ref_id']]
image = self.Imgs[ref['image_id']]
if type(ann['segmentation'][0]) == list: # polygon
rle = mask.frPyObjects(ann['segmentation'], image['height'], image['width'])
else:
rle = ann['segmentation']
m = mask.decode(rle)
m = np.sum(m, axis=2) # sometimes there are multiple binary maps (corresponding to multiple segs)
m = m.astype(np.uint8) # convert to np.uint8
# compute area
area = sum(mask.area(rle)) # should be close to ann['area']
return {'mask': m, 'area': area}
def showMask(self, ref):
M = self.getMask(ref)
msk = M['mask']
ax = plt.gca()
ax.imshow(msk)
if __name__ == '__main__':
refer = REFER('./data', dataset='refcocog', splitBy='google')  # data_root './data' is a placeholder; point it at your refer data folder
ref_ids = refer.getRefIds()
ref_ids = refer.getRefIds(split='train')
print('There are %s training referred objects.' % len(ref_ids))
for ref_id in ref_ids:
ref = refer.loadRefs(ref_id)[0]
if len(ref['sentences']) < 2:
continue
print('The label is %s.' % refer.Cats[ref['category_id']])
plt.figure()
refer.showRef(ref, seg_box='box')
plt.show()
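# Mask counterpart of the box demo above (a sketch reusing the same refer object
# and the last loaded ref):
#   M = refer.getMask(ref)    # {'mask': HxW uint8 array, 'area': pixel area}
#   plt.figure()
#   refer.showMask(ref)
#   plt.show()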
================================================
FILE: refer/setup.py
================================================
from distutils.core import setup
from Cython.Build import cythonize
from distutils.extension import Extension
import numpy as np
ext_modules = [
Extension(
'external._mask',
sources=['external/maskApi.c', 'external/_mask.pyx'],
include_dirs = [np.get_include(), 'external'],
extra_compile_args=['-Wno-cpp', '-Wno-unused-function', '-std=c99'],
)
]
setup(
name='external',
packages=['external'],
package_dir = {'external': 'external'},
version='2.0',
ext_modules=cythonize(ext_modules)
)
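# To build the Cython mask extension in place (standard distutils workflow,
# run from the refer/ directory):
#   python setup.py build_ext --inplace
# This compiles external/_mask.pyx and maskApi.c into the external._mask module
# used by external/mask.py.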
================================================
FILE: refer/test/sample_expressions_testA.json
================================================
{"predictions":[{"sent":"man in black","ref_id":47},{"sent":"person on right","ref_id":109},{"sent":"woman in red","ref_id":110},{"sent":"car behind bike","ref_id":111},{"sent":"car on left","ref_id":112},{"sent":"man in blue","ref_id":382},{"sent":"man in white","ref_id":383},{"sent":"left person","ref_id":519},{"sent":"man on right","ref_id":520},{"sent":"person in background","ref_id":525},{"sent":"person on left","ref_id":526},{"sent":"man in white","ref_id":527},{"sent":"guy in white","ref_id":528},{"sent":"guy in red","ref_id":537},{"sent":"white shirt","ref_id":538},{"sent":"player in white","ref_id":539},{"sent":"red shirt","ref_id":557},{"sent":"girl","ref_id":558},{"sent":"baby","ref_id":588},{"sent":"baby","ref_id":589},{"sent":"woman in front","ref_id":640},{"sent":"girl","ref_id":641},{"sent":"right guy","ref_id":732},{"sent":"man in white","ref_id":733},{"sent":"middle guy","ref_id":734},{"sent":"woman","ref_id":756},{"sent":"man on right","ref_id":757},{"sent":"woman","ref_id":814},{"sent":"man in white","ref_id":815},{"sent":"man in white shirt","ref_id":828},{"sent":"woman on right","ref_id":829},{"sent":"man in red","ref_id":931},{"sent":"woman in pink","ref_id":932},{"sent":"girl in pink","ref_id":933},{"sent":"middle guy","ref_id":945},{"sent":"second from right","ref_id":946},{"sent":"left guy","ref_id":947},{"sent":"white jacket","ref_id":954},{"sent":"right guy","ref_id":955},{"sent":"blue jacket","ref_id":956},{"sent":"man in white shirt","ref_id":1023},{"sent":"man","ref_id":1024},{"sent":"man in back","ref_id":1052},{"sent":"left guy","ref_id":1053},{"sent":"woman on right","ref_id":1152},{"sent":"woman on right","ref_id":1153},{"sent":"left guy","ref_id":1154},{"sent":"woman on right","ref_id":1333},{"sent":"man in black shirt","ref_id":1334},{"sent":"man","ref_id":1362},{"sent":"man","ref_id":1363},{"sent":"right guy","ref_id":1371},{"sent":"left guy","ref_id":1372},{"sent":"man in front","ref_id":1406},{"sent":"man on left","ref_id":1407},{"sent":"person on right","ref_id":1568},{"sent":"person in front","ref_id":1569},{"sent":"man in black","ref_id":1582},{"sent":"man in front","ref_id":1583},{"sent":"right skier","ref_id":1623},{"sent":"person in front","ref_id":1624},{"sent":"second from left","ref_id":1679},{"sent":"man on left","ref_id":1680},{"sent":"second from right","ref_id":1681},{"sent":"left guy","ref_id":1682},{"sent":"woman on right","ref_id":1683},{"sent":"girl on right","ref_id":1684},{"sent":"man on right","ref_id":1811},{"sent":"man in front of man in white shirt","ref_id":1812},{"sent":"woman in white shirt","ref_id":1861},{"sent":"man in black","ref_id":1862},{"sent":"groom","ref_id":1882},{"sent":"bride","ref_id":1883},{"sent":"middle guy","ref_id":1977},{"sent":"left guy","ref_id":1978},{"sent":"right guy","ref_id":1979},{"sent":"second from left","ref_id":1980},{"sent":"person on left","ref_id":1990},{"sent":"left person","ref_id":1991},{"sent":"player","ref_id":2001},{"sent":"top left corner","ref_id":2002},{"sent":"girl in white on left","ref_id":2129},{"sent":"white shirt","ref_id":2130},{"sent":"woman in white","ref_id":2131},{"sent":"red jacket","ref_id":2173},{"sent":"red","ref_id":2174},{"sent":"catcher","ref_id":2256},{"sent":"umpire","ref_id":2257},{"sent":"baby","ref_id":2264},{"sent":"man","ref_id":2265},{"sent":"boy in blue","ref_id":2291},{"sent":"boy in red","ref_id":2292},{"sent":"man in black","ref_id":2375},{"sent":"man in black","ref_id":2376},{"sent":"blue jacket","ref_id":2721},{"sent":"bottom 
left","ref_id":2722},{"sent":"man","ref_id":2767},{"sent":"man","ref_id":2768},{"sent":"batter","ref_id":2805},{"sent":"right guy","ref_id":2806},{"sent":"batter","ref_id":2807},{"sent":"woman in black","ref_id":2981},{"sent":"woman in white","ref_id":2982},{"sent":"left girl","ref_id":3247},{"sent":"man in white","ref_id":3248},{"sent":"man on left","ref_id":3257},{"sent":"woman in middle","ref_id":3258},{"sent":"woman on right","ref_id":3259},{"sent":"man in middle","ref_id":3260},{"sent":"guy on right","ref_id":3366},{"sent":"left person","ref_id":3367},{"sent":"girl in pink","ref_id":3768},{"sent":"girl in pink","ref_id":3769},{"sent":"right guy","ref_id":3772},{"sent":"man","ref_id":3773},{"sent":"man in blue shirt","ref_id":3805},{"sent":"person in blue shirt","ref_id":3806},{"sent":"man in black","ref_id":3807},{"sent":"guy in red","ref_id":4002},{"sent":"second horse from left","ref_id":4003},{"sent":"guy in blue shirt","ref_id":4014},{"sent":"man in blue shirt","ref_id":4015},{"sent":"left person","ref_id":4016},{"sent":"man in blue","ref_id":4017},{"sent":"girl on right","ref_id":4089},{"sent":"girl","ref_id":4090},{"sent":"woman","ref_id":4101},{"sent":"girl","ref_id":4102},{"sent":"woman in black","ref_id":4143},{"sent":"person sitting on left","ref_id":4144},{"sent":"man in black","ref_id":4145},{"sent":"white shirt","ref_id":4159},{"sent":"man on right","ref_id":4160},{"sent":"right girl","ref_id":4174},{"sent":"left girl","ref_id":4175},{"sent":"person on right","ref_id":4176},{"sent":"girl on left","ref_id":4177},{"sent":"woman in red","ref_id":4178},{"sent":"bride","ref_id":4187},{"sent":"right kid","ref_id":4188},{"sent":"person on left","ref_id":4190},{"sent":"woman in black","ref_id":4191},{"sent":"woman in white","ref_id":4192},{"sent":"left person","ref_id":4308},{"sent":"person on left","ref_id":4309},{"sent":"man on left","ref_id":4333},{"sent":"man in blue shirt","ref_id":4334},{"sent":"batter","ref_id":4345},{"sent":"catcher","ref_id":4346},{"sent":"umpire","ref_id":4347},{"sent":"man on right","ref_id":4396},{"sent":"right person","ref_id":4397},{"sent":"woman in blue","ref_id":4461},{"sent":"man in black","ref_id":4462},{"sent":"man on left","ref_id":4463},{"sent":"woman on right","ref_id":4485},{"sent":"woman","ref_id":4486},{"sent":"right horse","ref_id":4487},{"sent":"right horse","ref_id":4488},{"sent":"left horse","ref_id":4489},{"sent":"woman","ref_id":4578},{"sent":"woman","ref_id":4579},{"sent":"left bottom corner","ref_id":4580},{"sent":"woman in blue","ref_id":4581},{"sent":"head bottom right","ref_id":4582},{"sent":"woman","ref_id":4616},{"sent":"man","ref_id":4617},{"sent":"woman in black","ref_id":4711},{"sent":"left horse","ref_id":4712},{"sent":"umpire","ref_id":4765},{"sent":"catcher","ref_id":4766},{"sent":"man on right","ref_id":4868},{"sent":"man on right","ref_id":4869},{"sent":"man","ref_id":4870},{"sent":"white shirt","ref_id":5012},{"sent":"right guy","ref_id":5118},{"sent":"man in white","ref_id":5119},{"sent":"woman","ref_id":5149},{"sent":"man","ref_id":5150},{"sent":"right dog","ref_id":5170},{"sent":"left dog","ref_id":5171},{"sent":"woman","ref_id":5244},{"sent":"woman","ref_id":5245},{"sent":"woman in black","ref_id":5289},{"sent":"woman in black","ref_id":5290},{"sent":"man on right","ref_id":5291},{"sent":"person on left","ref_id":5292},{"sent":"woman in black","ref_id":5293},{"sent":"person on right","ref_id":5309},{"sent":"woman in red","ref_id":5310},{"sent":"girl","ref_id":5311},{"sent":"person on 
right","ref_id":5389},{"sent":"man","ref_id":5390},{"sent":"man","ref_id":5550},{"sent":"woman on right","ref_id":5551},{"sent":"man on left","ref_id":5552},{"sent":"man in white","ref_id":5615},{"sent":"woman in red","ref_id":5648},{"sent":"woman in blue","ref_id":5649},{"sent":"woman in black","ref_id":5650},{"sent":"catcher","ref_id":5767},{"sent":"batter","ref_id":5768},{"sent":"umpire","ref_id":5769},{"sent":"couch on left","ref_id":5776},{"sent":"couch on right","ref_id":5777},{"sent":"guy in red","ref_id":5782},{"sent":"red shirt","ref_id":5783},{"sent":"batter","ref_id":5784},{"sent":"girl on left","ref_id":5811},{"sent":"woman","ref_id":5812},{"sent":"bowl of white stuff on left","ref_id":5923},{"sent":"bowl of rice in the back","ref_id":5924},{"sent":"top right glass","ref_id":5925},{"sent":"top left corner","ref_id":5926},{"sent":"skier in middle","ref_id":6042},{"sent":"left skier","ref_id":6043},{"sent":"guy in white shirt","ref_id":6073},{"sent":"man","ref_id":6074},{"sent":"man in black","ref_id":6081},{"sent":"guy in white","ref_id":6082},{"sent":"umpire","ref_id":6118},{"sent":"batter","ref_id":6119},{"sent":"right guy","ref_id":6210},{"sent":"guy on right","ref_id":6211},{"sent":"kid","ref_id":6237},{"sent":"kid","ref_id":6238},{"sent":"baby","ref_id":6239},{"sent":"woman","ref_id":6257},{"sent":"person on right","ref_id":6461},{"sent":"woman","ref_id":6462},{"sent":"hand on left","ref_id":6607},{"sent":"left kid","ref_id":6698},{"sent":"player in red","ref_id":6699},{"sent":"player","ref_id":6799},{"sent":"guy in white","ref_id":6800},{"sent":"right person","ref_id":6809},{"sent":"guy in red","ref_id":6810},{"sent":"catcher","ref_id":6811},{"sent":"catcher","ref_id":6812},{"sent":"guy in white shirt","ref_id":6956},{"sent":"girl in blue","ref_id":6957},{"sent":"red shirt","ref_id":6958},{"sent":"bottom left corner","ref_id":7045},{"sent":"girl in black","ref_id":7046},{"sent":"girl","ref_id":7047},{"sent":"batter","ref_id":7170},{"sent":"batter","ref_id":7171},{"sent":"catcher","ref_id":7172},{"sent":"hand","ref_id":7211},{"sent":"the girl","ref_id":7212},{"sent":"umpire","ref_id":7249},{"sent":"catcher","ref_id":7250},{"sent":"catcher","ref_id":7251},{"sent":"man in white shirt","ref_id":7377},{"sent":"man on right","ref_id":7378},{"sent":"woman in white","ref_id":7379},{"sent":"girl","ref_id":7477},{"sent":"girl","ref_id":7478},{"sent":"man in red shirt","ref_id":7496},{"sent":"guy in red shirt","ref_id":7497},{"sent":"red shirt","ref_id":7498},{"sent":"red shirt","ref_id":7499},{"sent":"woman in blue","ref_id":7568},{"sent":"woman in white","ref_id":7569},{"sent":"man on left","ref_id":7570},{"sent":"woman on left","ref_id":7571},{"sent":"woman on right","ref_id":7572},{"sent":"man in blue shirt","ref_id":7573},{"sent":"man in blue","ref_id":7574},{"sent":"person in front of man in white shirt","ref_id":7575},{"sent":"chair on left","ref_id":7576},{"sent":"girl in white","ref_id":7608},{"sent":"left guy","ref_id":7609},{"sent":"man on right","ref_id":7610},{"sent":"boy in middle","ref_id":7611},{"sent":"man on right","ref_id":7612},{"sent":"man in front","ref_id":7613},{"sent":"right guy","ref_id":7710},{"sent":"right guy","ref_id":7711},{"sent":"man on left","ref_id":7712},{"sent":"man in blue shirt","ref_id":7742},{"sent":"man in middle","ref_id":7743},{"sent":"left guy","ref_id":7851},{"sent":"right guy","ref_id":7852},{"sent":"right woman","ref_id":7853},{"sent":"right guy","ref_id":7854},{"sent":"left most person","ref_id":7855},{"sent":"man in white shirt on 
left","ref_id":7856},{"sent":"man in front","ref_id":7857},{"sent":"girl on right","ref_id":7858},{"sent":"person on left","ref_id":7890},{"sent":"person on right","ref_id":7891},{"sent":"person in background","ref_id":7929},{"sent":"player","ref_id":7930},{"sent":"kid in red","ref_id":7967},{"sent":"girl in green","ref_id":7968},{"sent":"man in white shirt","ref_id":7969},{"sent":"man in white","ref_id":7970},{"sent":"girl","ref_id":7981},{"sent":"girl in pink","ref_id":7982},{"sent":"player in white","ref_id":8005},{"sent":"player in white","ref_id":8006},{"sent":"man in front","ref_id":8086},{"sent":"right person","ref_id":8087},{"sent":"left person","ref_id":8088},{"sent":"guy on bike","ref_id":8089},{"sent":"girl in white","ref_id":8116},{"sent":"player on left","ref_id":8117},{"sent":"player in red","ref_id":8118},{"sent":"right kid","ref_id":8119},{"sent":"right guy","ref_id":8214},{"sent":"second from left","ref_id":8215},{"sent":"second from right","ref_id":8216},{"sent":"left guy","ref_id":8217},{"sent":"man in red","ref_id":8247},{"sent":"woman in black","ref_id":8248},{"sent":"man on left","ref_id":8308},{"sent":"right guy","ref_id":8309},{"sent":"right guy","ref_id":8310},{"sent":"person on right","ref_id":8354},{"sent":"girl in white","ref_id":8355},{"sent":"horse on right","ref_id":8356},{"sent":"horse","ref_id":8357},{"sent":"right UNK","ref_id":8375},{"sent":"woman","ref_id":8376},{"sent":"right UNK","ref_id":8377},{"sent":"person on right","ref_id":8378},{"sent":"man on left","ref_id":8379},{"sent":"person on right","ref_id":8413},{"sent":"man on right","ref_id":8414},{"sent":"woman","ref_id":8415},{"sent":"bride","ref_id":8416},{"sent":"man in blue","ref_id":8490},{"sent":"girl in pink","ref_id":8491},{"sent":"girl in red","ref_id":8492},{"sent":"kid in red","ref_id":8500},{"sent":"number 5","ref_id":8501},{"sent":"guy in white","ref_id":8502},{"sent":"woman in white","ref_id":8685},{"sent":"kid in white","ref_id":8686},{"sent":"man","ref_id":8687},{"sent":"woman sitting down","ref_id":8694},{"sent":"man on right","ref_id":8695},{"sent":"man on right","ref_id":8696},{"sent":"woman on left","ref_id":8697},{"sent":"man in black","ref_id":8698},{"sent":"guy on right","ref_id":8699},{"sent":"left person","ref_id":8700},{"sent":"woman","ref_id":8701},{"sent":"guy on left","ref_id":8758},{"sent":"girl on right","ref_id":8759},{"sent":"woman on right","ref_id":8760},{"sent":"woman on left","ref_id":8770},{"sent":"woman","ref_id":8771},{"sent":"man on left","ref_id":8772},{"sent":"man in white","ref_id":8773},{"sent":"man on left","ref_id":8774},{"sent":"girl in pink","ref_id":8896},{"sent":"girl in white","ref_id":8897},{"sent":"man in black shirt","ref_id":8898},{"sent":"man in black shirt","ref_id":8899},{"sent":"person in white shirt","ref_id":9238},{"sent":"man in black shirt","ref_id":9239},{"sent":"woman on right","ref_id":9255},{"sent":"man in red","ref_id":9256},{"sent":"man in black","ref_id":9493},{"sent":"man","ref_id":9494},{"sent":"woman on left","ref_id":9509},{"sent":"woman in middle","ref_id":9510},{"sent":"man in white shirt","ref_id":9511},{"sent":"second from right","ref_id":9532},{"sent":"second from left","ref_id":9533},{"sent":"guy in white shirt","ref_id":9534},{"sent":"tennis player","ref_id":9760},{"sent":"man in blue","ref_id":9761},{"sent":"woman in middle","ref_id":9799},{"sent":"man in white shirt","ref_id":9800},{"sent":"man","ref_id":9856},{"sent":"woman sitting","ref_id":9857},{"sent":"woman","ref_id":9858},{"sent":"white car behind the guy in 
the background","ref_id":9865},{"sent":"blue shirt","ref_id":9866},{"sent":"batter","ref_id":9867},{"sent":"guy in background behind fence","ref_id":9949},{"sent":"batter","ref_id":9950},{"sent":"woman","ref_id":10015},{"sent":"girl","ref_id":10016},{"sent":"white mug","ref_id":10047},{"sent":"glass on left","ref_id":10048},{"sent":"woman","ref_id":10049},{"sent":"man on left","ref_id":10114},{"sent":"guy in white shirt","ref_id":10115},{"sent":"girl in pink dress","ref_id":10136},{"sent":"woman in pink","ref_id":10137},{"sent":"person in white shirt","ref_id":10138},{"sent":"left person","ref_id":10139},{"sent":"man in black on right","ref_id":10140},{"sent":"woman in pink","ref_id":10141},{"sent":"player in white","ref_id":10217},{"sent":"red shirt","ref_id":10218},{"sent":"guy in yellow","ref_id":10219},{"sent":"man on left","ref_id":10271},{"sent":"man on right","ref_id":10272},{"sent":"batter","ref_id":10332},{"sent":"right bear","ref_id":10374},{"sent":"bear in red","ref_id":10375},{"sent":"girl in pink","ref_id":10376},{"sent":"boy in middle","ref_id":10377},{"sent":"left guy","ref_id":10412},{"sent":"right guy","ref_id":10413},{"sent":"bottom right girl","ref_id":10449},{"sent":"bottom left head","ref_id":10450},{"sent":"man on left","ref_id":10554},{"sent":"woman","ref_id":10555},{"sent":"right guy","ref_id":10629},{"sent":"man on right","ref_id":10630},{"sent":"man in black shirt","ref_id":10749},{"sent":"man on left","ref_id":10750},{"sent":"woman eating","ref_id":10879},{"sent":"woman in white shirt","ref_id":10895},{"sent":"man in blue shirt","ref_id":10896},{"sent":"girl in red","ref_id":10995},{"sent":"woman","ref_id":10996},{"sent":"couch on right","ref_id":11019},{"sent":"dog","ref_id":11020},{"sent":"woman on left","ref_id":11021},{"sent":"man on right","ref_id":11022},{"sent":"left couch","ref_id":11023},{"sent":"right couch","ref_id":11024},{"sent":"right person","ref_id":11025},{"sent":"woman on left","ref_id":11026},{"sent":"man in middle","ref_id":11027},{"sent":"white pants","ref_id":11032},{"sent":"person in black","ref_id":11033},{"sent":"right kid","ref_id":11101},{"sent":"right kid","ref_id":11102},{"sent":"kid in middle","ref_id":11103},{"sent":"right kid","ref_id":11104},{"sent":"middle person","ref_id":11159},{"sent":"woman on left","ref_id":11160},{"sent":"woman in front","ref_id":11161},{"sent":"man in black shirt and jeans","ref_id":11203},{"sent":"man in blue shirt","ref_id":11204},{"sent":"woman in black","ref_id":11205},{"sent":"catcher","ref_id":11224},{"sent":"batter","ref_id":11225},{"sent":"arm","ref_id":11372},{"sent":"arm","ref_id":11373},{"sent":"man in white","ref_id":11409},{"sent":"guy in red","ref_id":11410},{"sent":"batter","ref_id":11411},{"sent":"woman in red","ref_id":11541},{"sent":"woman on left","ref_id":11637},{"sent":"man on right","ref_id":11638},{"sent":"woman in black","ref_id":11639},{"sent":"person on left","ref_id":11676},{"sent":"person on left","ref_id":11677},{"sent":"girl on left","ref_id":11752},{"sent":"woman","ref_id":11753},{"sent":"man on left","ref_id":11757},{"sent":"right sheep","ref_id":11758},{"sent":"person in front","ref_id":11769},{"sent":"arm","ref_id":11770},{"sent":"right guy","ref_id":11894},{"sent":"blue shirt","ref_id":11895},{"sent":"catcher","ref_id":12001},{"sent":"batter","ref_id":12002},{"sent":"woman on left","ref_id":12023},{"sent":"man","ref_id":12024},{"sent":"second from right","ref_id":12025},{"sent":"man on right","ref_id":12026},{"sent":"man on right","ref_id":12027},{"sent":"second from 
left","ref_id":12028},{"sent":"man on left","ref_id":12029},{"sent":"second from right","ref_id":12030},{"sent":"woman in red","ref_id":12261},{"sent":"right front animal","ref_id":12262},{"sent":"front cow","ref_id":12263},{"sent":"man in white","ref_id":12357},{"sent":"man","ref_id":12358},{"sent":"person on right","ref_id":12665},{"sent":"woman","ref_id":12666},{"sent":"man in blue","ref_id":12719},{"sent":"woman in red","ref_id":12720},{"sent":"woman in black","ref_id":12721},{"sent":"woman","ref_id":12758},{"sent":"man in blue","ref_id":12759},{"sent":"girl on right","ref_id":12963},{"sent":"woman","ref_id":12964},{"sent":"man on left","ref_id":13055},{"sent":"man on left","ref_id":13056},{"sent":"man on right","ref_id":13057},{"sent":"woman","ref_id":13087},{"sent":"man","ref_id":13088},{"sent":"girl in white","ref_id":13209},{"sent":"man on left","ref_id":13210},{"sent":"bowl of food in front","ref_id":13236},{"sent":"chair top left","ref_id":13237},{"sent":"bowl of food in front","ref_id":13238},{"sent":"top right corner","ref_id":13239},{"sent":"top right corner","ref_id":13373},{"sent":"table top left","ref_id":13374},{"sent":"guy in white shirt","ref_id":13382},{"sent":"guy in white","ref_id":13383},{"sent":"guy on right","ref_id":13386},{"sent":"man on left","ref_id":13387},{"sent":"table cloth","ref_id":13410},{"sent":"woman on left","ref_id":13411},{"sent":"girl on right","ref_id":13412},{"sent":"guy in white shirt","ref_id":13439},{"sent":"man in blue shirt","ref_id":13440},{"sent":"girl in white","ref_id":13441},{"sent":"man in white shirt","ref_id":13442},{"sent":"girl in middle","ref_id":13625},{"sent":"girl on left","ref_id":13626},{"sent":"girl in pink","ref_id":13627},{"sent":"man in blue shirt","ref_id":13793},{"sent":"woman in red","ref_id":13794},{"sent":"girl","ref_id":13869},{"sent":"woman","ref_id":13870},{"sent":"person on left","ref_id":13894},{"sent":"woman in black","ref_id":13895},{"sent":"top left corner","ref_id":14024},{"sent":"left pizza","ref_id":14025},{"sent":"right pizza","ref_id":14026},{"sent":"top right corner","ref_id":14027},{"sent":"white shirt left","ref_id":14038},{"sent":"person in red","ref_id":14039},{"sent":"left bed","ref_id":14040},{"sent":"right person","ref_id":14041},{"sent":"red thing on left","ref_id":14042},{"sent":"person on left","ref_id":14102},{"sent":"person on right","ref_id":14103},{"sent":"person in front","ref_id":14104},{"sent":"person in middle","ref_id":14105},{"sent":"person in blue","ref_id":14106},{"sent":"left chair","ref_id":14201},{"sent":"woman","ref_id":14202},{"sent":"chair on left","ref_id":14203},{"sent":"chair in front of woman","ref_id":14204},{"sent":"man in front","ref_id":14270},{"sent":"man in white shirt","ref_id":14271},{"sent":"bike on right","ref_id":14272},{"sent":"bike in front","ref_id":14273},{"sent":"batter","ref_id":14274},{"sent":"batter","ref_id":14275},{"sent":"umpire","ref_id":14276},{"sent":"batter","ref_id":14277},{"sent":"man","ref_id":14316},{"sent":"woman","ref_id":14317},{"sent":"kid on right","ref_id":14352},{"sent":"kid in middle","ref_id":14353},{"sent":"girl in pink","ref_id":14377},{"sent":"girl on left","ref_id":14378},{"sent":"girl","ref_id":14379},{"sent":"woman in pink","ref_id":14380},{"sent":"woman","ref_id":14482},{"sent":"woman","ref_id":14483},{"sent":"woman in black","ref_id":14519},{"sent":"man in middle","ref_id":14520},{"sent":"hand holding phone","ref_id":14521},{"sent":"woman in black","ref_id":14522},{"sent":"woman in green","ref_id":14523},{"sent":"woman in 
black","ref_id":14524},{"sent":"man on left","ref_id":14601},{"sent":"left guy","ref_id":14602},{"sent":"guy in black shirt","ref_id":14603},{"sent":"right person","ref_id":14694},{"sent":"guy in blue","ref_id":14695},{"sent":"umpire","ref_id":14755},{"sent":"catcher","ref_id":14756},{"sent":"batter","ref_id":14757},{"sent":"right person","ref_id":14855},{"sent":"left person","ref_id":14856},{"sent":"person on right","ref_id":14857},{"sent":"man in black shirt","ref_id":14869},{"sent":"man in white","ref_id":14870},{"sent":"kid on left","ref_id":14883},{"sent":"kid on right","ref_id":14884},{"sent":"right guy","ref_id":14940},{"sent":"girl in white","ref_id":14941},{"sent":"man on left","ref_id":14968},{"sent":"red bus","ref_id":14969},{"sent":"man in white shirt","ref_id":14981},{"sent":"man in white","ref_id":14982},{"sent":"girl in white shirt","ref_id":15085},{"sent":"man in white shirt","ref_id":15086},{"sent":"girl in white","ref_id":15087},{"sent":"white table in front of girl","ref_id":15088},{"sent":"arm on left","ref_id":15089},{"sent":"right person","ref_id":15092},{"sent":"woman in white","ref_id":15093},{"sent":"man in blue shirt","ref_id":15094},{"sent":"woman on left","ref_id":15253},{"sent":"person on right","ref_id":15254},{"sent":"white shirt","ref_id":15255},{"sent":"woman in white","ref_id":15342},{"sent":"man","ref_id":15343},{"sent":"hand on left","ref_id":15348},{"sent":"right person","ref_id":15349},{"sent":"woman on left","ref_id":15366},{"sent":"left person","ref_id":15367},{"sent":"woman on right","ref_id":15368},{"sent":"person in white shirt","ref_id":15369},{"sent":"person on left","ref_id":15370},{"sent":"guy","ref_id":15394},{"sent":"blue shirt","ref_id":15432},{"sent":"baby","ref_id":15433},{"sent":"woman in red","ref_id":15555},{"sent":"woman in blue","ref_id":15556},{"sent":"left person","ref_id":15563},{"sent":"man in white","ref_id":15564},{"sent":"woman","ref_id":15699},{"sent":"bottom left corner","ref_id":15754},{"sent":"man in white","ref_id":15755},{"sent":"right guy","ref_id":15825},{"sent":"man in suit","ref_id":15826},{"sent":"man on right","ref_id":15986},{"sent":"man","ref_id":15987},{"sent":"top right corner","ref_id":16068},{"sent":"tennis player","ref_id":16069},{"sent":"girl","ref_id":16077},{"sent":"man","ref_id":16078},{"sent":"left woman","ref_id":16126},{"sent":"woman on left","ref_id":16127},{"sent":"woman in middle","ref_id":16128},{"sent":"woman in middle","ref_id":16129},{"sent":"man in black suit","ref_id":16130},{"sent":"man in middle","ref_id":16131},{"sent":"woman","ref_id":16200},{"sent":"woman","ref_id":16201},{"sent":"right player","ref_id":16425},{"sent":"left player","ref_id":16426},{"sent":"right person","ref_id":16543},{"sent":"right person","ref_id":16544},{"sent":"guy in red","ref_id":16545},{"sent":"left blue","ref_id":16566},{"sent":"pink","ref_id":16567},{"sent":"person in white","ref_id":16568},{"sent":"person in front","ref_id":16569},{"sent":"horse","ref_id":16636},{"sent":"man in front","ref_id":16732},{"sent":"man in white","ref_id":16738},{"sent":"right guy","ref_id":16739},{"sent":"right person","ref_id":16740},{"sent":"man on left","ref_id":16741},{"sent":"right guy","ref_id":16786},{"sent":"red shirt","ref_id":16787},{"sent":"man in black","ref_id":16788},{"sent":"man in middle","ref_id":16804},{"sent":"woman on left","ref_id":16805},{"sent":"man in suit","ref_id":16892},{"sent":"top left corner","ref_id":16896},{"sent":"hand on left","ref_id":16897},{"sent":"person on left","ref_id":16898},{"sent":"head of 
person in front of girl","ref_id":17039},{"sent":"girl","ref_id":17040},{"sent":"left guy","ref_id":17138},{"sent":"woman in black","ref_id":17139},{"sent":"catcher","ref_id":17322},{"sent":"umpire","ref_id":17323},{"sent":"batter","ref_id":17324},{"sent":"man","ref_id":17488},{"sent":"girl","ref_id":17489},{"sent":"top right corner","ref_id":17497},{"sent":"person on left","ref_id":17498},{"sent":"white sheep","ref_id":17523},{"sent":"man on right","ref_id":17524},{"sent":"man on left","ref_id":17545},{"sent":"man on right","ref_id":17546},{"sent":"left person","ref_id":17579},{"sent":"right guy","ref_id":17580},{"sent":"catcher","ref_id":17622},{"sent":"player","ref_id":17623},{"sent":"right side of pizza","ref_id":17629},{"sent":"baby","ref_id":17630},{"sent":"woman on left","ref_id":17643},{"sent":"girl in front","ref_id":17644},{"sent":"woman in black","ref_id":17715},{"sent":"woman in black","ref_id":17716},{"sent":"man on left","ref_id":17717},{"sent":"man in white shirt","ref_id":17731},{"sent":"left guy","ref_id":17732},{"sent":"woman in middle","ref_id":17906},{"sent":"woman in white","ref_id":17907},{"sent":"batter","ref_id":17974},{"sent":"catcher","ref_id":17975},{"sent":"catcher","ref_id":17976},{"sent":"batter","ref_id":17977},{"sent":"guy on right","ref_id":17986},{"sent":"guy","ref_id":17987},{"sent":"woman in white shirt","ref_id":18064},{"sent":"woman in white shirt","ref_id":18065},{"sent":"man in black shirt","ref_id":18066},{"sent":"man on right","ref_id":18067},{"sent":"woman in black","ref_id":18127},{"sent":"woman in black","ref_id":18128},{"sent":"bride","ref_id":18162},{"sent":"woman","ref_id":18163},{"sent":"man","ref_id":18164},{"sent":"right couch","ref_id":18167},{"sent":"man on left","ref_id":18168},{"sent":"right couch","ref_id":18169},{"sent":"left couch","ref_id":18170},{"sent":"man on right","ref_id":18274},{"sent":"table in front of man","ref_id":18275},{"sent":"man in blue","ref_id":18276},{"sent":"man on right","ref_id":18277},{"sent":"bottom right corner","ref_id":18278},{"sent":"right guy","ref_id":18297},{"sent":"tie","ref_id":18298},{"sent":"man in white","ref_id":18325},{"sent":"man in background","ref_id":18326},{"sent":"umbrella","ref_id":18362},{"sent":"person on left","ref_id":18363},{"sent":"pink umbrella","ref_id":18364},{"sent":"girl in pink","ref_id":18448},{"sent":"girl in pink","ref_id":18449},{"sent":"right guy","ref_id":18488},{"sent":"man on right","ref_id":18489},{"sent":"man in blue shirt","ref_id":18490},{"sent":"kid in white shirt","ref_id":18584},{"sent":"guy in white shirt","ref_id":18585},{"sent":"red shirt","ref_id":18586},{"sent":"guy on left","ref_id":18701},{"sent":"left guy","ref_id":18702},{"sent":"guy on right","ref_id":18703},{"sent":"guy in front","ref_id":18704},{"sent":"person on left","ref_id":18738},{"sent":"woman","ref_id":18739},{"sent":"woman","ref_id":18740},{"sent":"woman on right","ref_id":18798},{"sent":"man on right","ref_id":18799},{"sent":"woman in black","ref_id":18800},{"sent":"person in white","ref_id":18804},{"sent":"right guy","ref_id":18805},{"sent":"right UNK","ref_id":18806},{"sent":"right person","ref_id":18846},{"sent":"left person","ref_id":18847},{"sent":"man in white","ref_id":18888},{"sent":"guy on right","ref_id":18889},{"sent":"person on left","ref_id":18912},{"sent":"man","ref_id":18913},{"sent":"right guy","ref_id":18914},{"sent":"red shirt","ref_id":18931},{"sent":"white shirt right","ref_id":18932},{"sent":"person in background in background","ref_id":18933},{"sent":"blurry person in 
background behind the tennis player","ref_id":18934},{"sent":"blurry person in background on left","ref_id":18935},{"sent":"guy in red shirt","ref_id":18936},{"sent":"guy in white shirt","ref_id":18937},{"sent":"tennis player","ref_id":18938},{"sent":"girl","ref_id":19008},{"sent":"person on left","ref_id":19040},{"sent":"girl in yellow","ref_id":19062},{"sent":"bottom left head","ref_id":19063},{"sent":"man on left","ref_id":19064},{"sent":"right bottom corner","ref_id":19132},{"sent":"person in white on left","ref_id":19133},{"sent":"man in black","ref_id":19134},{"sent":"bottom left corner","ref_id":19279},{"sent":"man","ref_id":19280},{"sent":"girl on right","ref_id":19325},{"sent":"kid","ref_id":19326},{"sent":"girl on right","ref_id":19327},{"sent":"man in blue","ref_id":19348},{"sent":"man on right","ref_id":19349},{"sent":"man on left","ref_id":19428},{"sent":"guy in middle","ref_id":19429},{"sent":"guy on right","ref_id":19430},{"sent":"woman in pink","ref_id":19433},{"sent":"man in blue shirt","ref_id":19434},{"sent":"girl in red","ref_id":19448},{"sent":"left girl","ref_id":19449},{"sent":"girl in red","ref_id":19450},{"sent":"right guy","ref_id":19451},{"sent":"girl on right","ref_id":19469},{"sent":"pizza on right","ref_id":19470},{"sent":"girl","ref_id":19471},{"sent":"woman in white","ref_id":19509},{"sent":"man in black","ref_id":19510},{"sent":"man in front","ref_id":19511},{"sent":"batter","ref_id":19512},{"sent":"guy in blue shirt behind fence","ref_id":19513},{"sent":"person in background on left","ref_id":19514},{"sent":"red shirt","ref_id":19515},{"sent":"man in red","ref_id":19516},{"sent":"person in red","ref_id":19517},{"sent":"woman on left","ref_id":19543},{"sent":"person in background","ref_id":19634},{"sent":"kid","ref_id":19635},{"sent":"left person","ref_id":19684},{"sent":"woman","ref_id":19685},{"sent":"woman in pink","ref_id":19686},{"sent":"woman on right","ref_id":19687},{"sent":"woman","ref_id":19688},{"sent":"girl on right","ref_id":19732},{"sent":"left guy","ref_id":19733},{"sent":"arm","ref_id":19743},{"sent":"right cake","ref_id":19744},{"sent":"cake","ref_id":19745},{"sent":"the arm on the left","ref_id":19746},{"sent":"bottom left hand","ref_id":19843},{"sent":"girl","ref_id":19844},{"sent":"right person","ref_id":19901},{"sent":"left person","ref_id":19902},{"sent":"red jacket","ref_id":19903},{"sent":"second from right","ref_id":19904},{"sent":"woman in red","ref_id":19941},{"sent":"woman on left","ref_id":19942},{"sent":"woman in white","ref_id":20041},{"sent":"woman on right","ref_id":20042},{"sent":"laptop on left","ref_id":20099},{"sent":"middle laptop","ref_id":20100},{"sent":"man in white shirt","ref_id":20101},{"sent":"left laptop","ref_id":20102},{"sent":"man in middle","ref_id":20103},{"sent":"man","ref_id":20246},{"sent":"woman","ref_id":20247},{"sent":"man on left","ref_id":20268},{"sent":"kid","ref_id":20269},{"sent":"right pizza","ref_id":20311},{"sent":"pizza on left","ref_id":20312},{"sent":"pizza on right","ref_id":20313},{"sent":"top right corner","ref_id":20314},{"sent":"arm in back","ref_id":20315},{"sent":"left person","ref_id":20389},{"sent":"kid","ref_id":20390},{"sent":"man in middle","ref_id":20420},{"sent":"man on left","ref_id":20421},{"sent":"top right corner","ref_id":20454},{"sent":"hand","ref_id":20455},{"sent":"woman on left","ref_id":20469},{"sent":"man in middle","ref_id":20470},{"sent":"woman on right","ref_id":20471},{"sent":"woman on right","ref_id":20479},{"sent":"man in 
front","ref_id":20480},{"sent":"woman","ref_id":20505},{"sent":"woman on left","ref_id":20506},{"sent":"man on left","ref_id":20512},{"sent":"girl in red","ref_id":20513},{"sent":"girl in blue","ref_id":20514},{"sent":"man","ref_id":20602},{"sent":"bride","ref_id":20603},{"sent":"man in black","ref_id":20649},{"sent":"man in black","ref_id":20650},{"sent":"person in front","ref_id":20663},{"sent":"left person","ref_id":20664},{"sent":"person on right","ref_id":20665},{"sent":"white shirt","ref_id":20666},{"sent":"person on right","ref_id":20667},{"sent":"bottom of the UNK","ref_id":20668},{"sent":"black thing on top of suitcase","ref_id":20755},{"sent":"legs on right","ref_id":20756},{"sent":"left leg","ref_id":20757},{"sent":"kid","ref_id":20791},{"sent":"girl","ref_id":20792},{"sent":"man on left","ref_id":20875},{"sent":"man on right","ref_id":20876},{"sent":"woman in middle","ref_id":20877},{"sent":"man in black","ref_id":20938},{"sent":"person on right","ref_id":20939},{"sent":"giraffe","ref_id":20940},{"sent":"giraffe on right","ref_id":20941},{"sent":"man on right","ref_id":20942},{"sent":"man in black","ref_id":20943},{"sent":"left person","ref_id":20944},{"sent":"kid in front","ref_id":20945},{"sent":"batter","ref_id":20946},{"sent":"catcher","ref_id":20947},{"sent":"umpire","ref_id":20948},{"sent":"catcher","ref_id":20954},{"sent":"top left corner","ref_id":20977},{"sent":"person in back","ref_id":20978},{"sent":"baby","ref_id":21019},{"sent":"baby","ref_id":21020},{"sent":"sheep in front","ref_id":21081},{"sent":"right kid","ref_id":21082},{"sent":"girl in pink","ref_id":21083},{"sent":"sheep in front","ref_id":21084},{"sent":"girl on left","ref_id":21122},{"sent":"man on right","ref_id":21123},{"sent":"man in black","ref_id":21124},{"sent":"woman in white","ref_id":21125},{"sent":"left guy","ref_id":21190},{"sent":"woman","ref_id":21191},{"sent":"man","ref_id":21292},{"sent":"white tie","ref_id":21293},{"sent":"UNK","ref_id":21294},{"sent":"left tie","ref_id":21295},{"sent":"left person","ref_id":21296},{"sent":"woman in white","ref_id":21302},{"sent":"man in white","ref_id":21303},{"sent":"girl on left","ref_id":21410},{"sent":"girl on right","ref_id":21411},{"sent":"number 18","ref_id":21422},{"sent":"man in blue shirt","ref_id":21423},{"sent":"number 2","ref_id":21424},{"sent":"left player","ref_id":21425},{"sent":"number 18","ref_id":21426},{"sent":"second from left","ref_id":21433},{"sent":"second board from right","ref_id":21434},{"sent":"right person","ref_id":21435},{"sent":"second from right","ref_id":21436},{"sent":"second from left","ref_id":21437},{"sent":"middle person","ref_id":21438},{"sent":"left person","ref_id":21439},{"sent":"man on left","ref_id":21440},{"sent":"right girl","ref_id":21444},{"sent":"man in white","ref_id":21525},{"sent":"person on right","ref_id":21580},{"sent":"person on left","ref_id":21581},{"sent":"man in red shirt","ref_id":21607},{"sent":"man on left","ref_id":21608},{"sent":"woman in front","ref_id":21609},{"sent":"pizza slice on left","ref_id":21616},{"sent":"pizza slice","ref_id":21617},{"sent":"hand on left","ref_id":21618},{"sent":"white shirt upper right","ref_id":21619},{"sent":"bottom left corner","ref_id":21798},{"sent":"woman","ref_id":21799},{"sent":"girl in blue","ref_id":22059},{"sent":"girl in pink","ref_id":22060},{"sent":"kid on right","ref_id":22061},{"sent":"girl in pink","ref_id":22062},{"sent":"girl in pink","ref_id":22063},{"sent":"girl in white","ref_id":22088},{"sent":"left guy","ref_id":22089},{"sent":"guy in 
blue","ref_id":22117},{"sent":"red shirt","ref_id":22118},{"sent":"woman","ref_id":22475},{"sent":"girl in yellow","ref_id":22476},{"sent":"right racket","ref_id":22477},{"sent":"man in blue shirt","ref_id":22504},{"sent":"woman on left","ref_id":22505},{"sent":"woman on right","ref_id":22659},{"sent":"woman on left","ref_id":22660},{"sent":"woman in black","ref_id":22715},{"sent":"woman in black","ref_id":22716},{"sent":"man in white","ref_id":22717},{"sent":"woman on right","ref_id":22718},{"sent":"person in white shirt","ref_id":22796},{"sent":"woman in white","ref_id":22797},{"sent":"left person","ref_id":22798},{"sent":"right guy","ref_id":22862},{"sent":"left player","ref_id":22863},{"sent":"woman","ref_id":23015},{"sent":"woman in black","ref_id":23016},{"sent":"person on left","ref_id":23077},{"sent":"man","ref_id":23078},{"sent":"red shirt","ref_id":23129},{"sent":"red shirt","ref_id":23130},{"sent":"girl in pink","ref_id":23131},{"sent":"woman on right","ref_id":23179},{"sent":"woman in black","ref_id":23180},{"sent":"man on left","ref_id":23192},{"sent":"man on right","ref_id":23193},{"sent":"man in white shirt","ref_id":23194},{"sent":"man on left","ref_id":23235},{"sent":"woman in white","ref_id":23236},{"sent":"woman in middle","ref_id":23237},{"sent":"man in middle","ref_id":23249},{"sent":"man on right","ref_id":23250},{"sent":"man on left","ref_id":23251},{"sent":"left person","ref_id":23254},{"sent":"man in white shirt","ref_id":23255},{"sent":"right person","ref_id":23256},{"sent":"catcher","ref_id":23358},{"sent":"umpire","ref_id":23359},{"sent":"man in white","ref_id":23410},{"sent":"woman in red","ref_id":23411},{"sent":"girl in pink","ref_id":23412},{"sent":"baby","ref_id":23564},{"sent":"baby","ref_id":23565},{"sent":"bottom right bowl","ref_id":23652},{"sent":"woman in white","ref_id":23653},{"sent":"woman in white","ref_id":23654},{"sent":"top right corner","ref_id":23857},{"sent":"man in white shirt","ref_id":23863},{"sent":"guy in blue","ref_id":23864},{"sent":"guy in blue shirt","ref_id":23865},{"sent":"girl in yellow","ref_id":23866},{"sent":"man on left","ref_id":23890},{"sent":"woman on right","ref_id":23891},{"sent":"girl","ref_id":23910},{"sent":"girl","ref_id":23911},{"sent":"woman in red","ref_id":23919},{"sent":"person in front","ref_id":23920},{"sent":"right guy","ref_id":24045},{"sent":"woman in white","ref_id":24046},{"sent":"woman in middle","ref_id":24058},{"sent":"man in middle","ref_id":24059},{"sent":"man in white shirt","ref_id":24060},{"sent":"man on right","ref_id":24061},{"sent":"man on left","ref_id":24062},{"sent":"woman in purple","ref_id":24063},{"sent":"person on right","ref_id":24064},{"sent":"woman in middle","ref_id":24237},{"sent":"baby","ref_id":24238},{"sent":"left guy","ref_id":24265},{"sent":"man on right","ref_id":24266},{"sent":"man in black","ref_id":24267},{"sent":"right player","ref_id":24297},{"sent":"left player","ref_id":24298},{"sent":"man on left","ref_id":24320},{"sent":"man in white","ref_id":24321},{"sent":"man on right","ref_id":24322},{"sent":"man on right","ref_id":24323},{"sent":"man in blue","ref_id":24369},{"sent":"left glass","ref_id":24370},{"sent":"man in red","ref_id":24371},{"sent":"left guy","ref_id":24381},{"sent":"person on right","ref_id":24382},{"sent":"man on right","ref_id":24396},{"sent":"man in black","ref_id":24397},{"sent":"girl on left","ref_id":24454},{"sent":"girl in white","ref_id":24455},{"sent":"man on right","ref_id":24456},{"sent":"man in white","ref_id":24491},{"sent":"person on 
left","ref_id":24492},{"sent":"woman on right","ref_id":24493},{"sent":"man in blue shirt","ref_id":24510},{"sent":"woman in back","ref_id":24511},{"sent":"woman","ref_id":24512},{"sent":"man in white","ref_id":24513},{"sent":"right person","ref_id":24525},{"sent":"left guy","ref_id":24526},{"sent":"boy in white","ref_id":24527},{"sent":"right hot dog","ref_id":24891},{"sent":"hot dog on left","ref_id":24892},{"sent":"hand on left","ref_id":24893},{"sent":"arm on right","ref_id":24894},{"sent":"man on left","ref_id":24895},{"sent":"man in red shirt","ref_id":24896},{"sent":"woman in white","ref_id":24897},{"sent":"left guy","ref_id":24938},{"sent":"girl in white","ref_id":24939},{"sent":"white shirt","ref_id":25002},{"sent":"man in black shirt on left","ref_id":25003},{"sent":"man in white shirt","ref_id":25004},{"sent":"person in white shirt","ref_id":25077},{"sent":"man","ref_id":25078},{"sent":"person on right","ref_id":25334},{"sent":"man","ref_id":25335},{"sent":"woman","ref_id":25359},{"sent":"man","ref_id":25360},{"sent":"top left black shirt","ref_id":25386},{"sent":"top right corner","ref_id":25387},{"sent":"batter","ref_id":25419},{"sent":"batter","ref_id":25420},{"sent":"right player","ref_id":25471},{"sent":"left player","ref_id":25472},{"sent":"person on left","ref_id":25546},{"sent":"guy in white","ref_id":25547},{"sent":"kid","ref_id":25548},{"sent":"woman","ref_id":25631},{"sent":"woman in white","ref_id":25632},{"sent":"man on right","ref_id":25753},{"sent":"boy in yellow","ref_id":25754},{"sent":"white shirt","ref_id":25800},{"sent":"woman","ref_id":25801},{"sent":"hand","ref_id":25802},{"sent":"umpire","ref_id":25818},{"sent":"batter","ref_id":25819},{"sent":"man","ref_id":26042},{"sent":"girl","ref_id":26043},{"sent":"man on right","ref_id":26086},{"sent":"man in black","ref_id":26087},{"sent":"man in black shirt","ref_id":26088},{"sent":"man on left","ref_id":26089},{"sent":"left most person","ref_id":26263},{"sent":"second board from right","ref_id":26264},{"sent":"man on left","ref_id":26265},{"sent":"guy in middle","ref_id":26266},{"sent":"person in middle","ref_id":26267},{"sent":"man","ref_id":26498},{"sent":"woman","ref_id":26499},{"sent":"guy on right","ref_id":26509},{"sent":"kid in white","ref_id":26510},{"sent":"person on right","ref_id":26571},{"sent":"person on right","ref_id":26572},{"sent":"batter","ref_id":26628},{"sent":"umpire","ref_id":26629},{"sent":"catcher","ref_id":26630},{"sent":"person on left","ref_id":26631},{"sent":"man in white shirt","ref_id":26632},{"sent":"woman in black dress","ref_id":26633},{"sent":"woman in black","ref_id":26634},{"sent":"person in white shirt","ref_id":26684},{"sent":"man in white","ref_id":26685},{"sent":"man in middle","ref_id":26686},{"sent":"woman in front","ref_id":26698},{"sent":"woman","ref_id":26699},{"sent":"man in front","ref_id":26744},{"sent":"right person","ref_id":26745},{"sent":"table on right","ref_id":26749},{"sent":"right kid","ref_id":26750},{"sent":"boy in white shirt","ref_id":26751},{"sent":"left table","ref_id":26752},{"sent":"person on right","ref_id":26856},{"sent":"left person","ref_id":26857},{"sent":"bottom left head","ref_id":26858},{"sent":"guy in white","ref_id":26877},{"sent":"guy in black","ref_id":26878},{"sent":"man in front","ref_id":26977},{"sent":"person on left","ref_id":26978},{"sent":"kid in red","ref_id":27212},{"sent":"kid in red","ref_id":27213},{"sent":"left person","ref_id":27366},{"sent":"bottom left corner","ref_id":27367},{"sent":"bottom 
sandwich","ref_id":27368},{"sent":"boy in blue","ref_id":27369},{"sent":"man in blue","ref_id":27370},{"sent":"person on right","ref_id":27447},{"sent":"person on left","ref_id":27448},{"sent":"man on left with hat","ref_id":27489},{"sent":"horse in front","ref_id":27490},{"sent":"horse on right","ref_id":27491},{"sent":"man in front with blue hat","ref_id":27492},{"sent":"horse on left","ref_id":27493},{"sent":"man in blue shirt","ref_id":27550},{"sent":"man in white","ref_id":27551},{"sent":"person on right","ref_id":27643},{"sent":"person in middle","ref_id":27644},{"sent":"chef on right","ref_id":27684},{"sent":"left chef","ref_id":27685},{"sent":"bottom right phone","ref_id":27717},{"sent":"bottom left hand","ref_id":27718},{"sent":"man on right","ref_id":27728},{"sent":"man in white shirt","ref_id":27729},{"sent":"man","ref_id":27775},{"sent":"bride","ref_id":27776},{"sent":"man in white shirt","ref_id":27946},{"sent":"woman in white","ref_id":27947},{"sent":"blue shirt","ref_id":27948},{"sent":"woman in white","ref_id":27949},{"sent":"bottom right corner","ref_id":27950},{"sent":"man on right","ref_id":27957},{"sent":"man","ref_id":27958},{"sent":"person in front","ref_id":28059},{"sent":"person in middle","ref_id":28060},{"sent":"the umbrella","ref_id":28061},{"sent":"man in white","ref_id":28068},{"sent":"man in blue","ref_id":28069},{"sent":"man in white shirt","ref_id":28070},{"sent":"person on left","ref_id":28146},{"sent":"man in white","ref_id":28147},{"sent":"person in blue","ref_id":28386},{"sent":"person in front","ref_id":28387},{"sent":"woman","ref_id":28388},{"sent":"man on right","ref_id":28389},{"sent":"woman","ref_id":28390},{"sent":"catcher","ref_id":28414},{"sent":"batter","ref_id":28415},{"sent":"baby","ref_id":28479},{"sent":"baby","ref_id":28480},{"sent":"left guy","ref_id":28761},{"sent":"right person","ref_id":28762},{"sent":"right skier","ref_id":28788},{"sent":"middle person","ref_id":28789},{"sent":"blue shirt","ref_id":28790},{"sent":"girl in pink","ref_id":28791},{"sent":"baby","ref_id":28792},{"sent":"arm on left","ref_id":28825},{"sent":"arm on right","ref_id":28826},{"sent":"man in white shirt","ref_id":28827},{"sent":"woman in black","ref_id":28874},{"sent":"boy in blue shirt","ref_id":28875},{"sent":"man on left","ref_id":29002},{"sent":"right guy","ref_id":29003},{"sent":"person on left","ref_id":29017},{"sent":"person on right","ref_id":29018},{"sent":"man in white shirt","ref_id":29100},{"sent":"guy in red shirt","ref_id":29101},{"sent":"player in front","ref_id":29102},{"sent":"girl","ref_id":29236},{"sent":"left girl","ref_id":29237},{"sent":"person on right","ref_id":29341},{"sent":"tennis player","ref_id":29342},{"sent":"guy on bike","ref_id":29390},{"sent":"left bike","ref_id":29391},{"sent":"bike on right","ref_id":29392},{"sent":"bike","ref_id":29393},{"sent":"left","ref_id":29417},{"sent":"woman","ref_id":29418},{"sent":"batter","ref_id":29448},{"sent":"batter","ref_id":29449},{"sent":"man in blue shirt","ref_id":29536},{"sent":"man in white shirt","ref_id":29537},{"sent":"man in white","ref_id":29538},{"sent":"right person","ref_id":29560},{"sent":"tennis player","ref_id":29561},{"sent":"left person","ref_id":29637},{"sent":"right person","ref_id":29638},{"sent":"woman in front","ref_id":29639},{"sent":"woman in black","ref_id":29640},{"sent":"person on right","ref_id":29677},{"sent":"left bed","ref_id":29678},{"sent":"woman on left","ref_id":29811},{"sent":"man","ref_id":29812},{"sent":"person in 
front","ref_id":29882},{"sent":"woman","ref_id":29883},{"sent":"girl","ref_id":29908},{"sent":"cake in front of cake","ref_id":29909},{"sent":"baby","ref_id":29910},{"sent":"man in black shirt","ref_id":29956},{"sent":"kid in white","ref_id":29957},{"sent":"man on left","ref_id":30096},{"sent":"guy in back","ref_id":30097},{"sent":"man in white shirt","ref_id":30098},{"sent":"man","ref_id":30099},{"sent":"woman in red","ref_id":30357},{"sent":"person on right","ref_id":30358},{"sent":"man","ref_id":30469},{"sent":"kid","ref_id":30470},{"sent":"woman","ref_id":30495},{"sent":"man","ref_id":30496},{"sent":"left person","ref_id":30516},{"sent":"woman","ref_id":30517},{"sent":"right person","ref_id":30525},{"sent":"girl","ref_id":30526},{"sent":"right girl","ref_id":30556},{"sent":"second from left","ref_id":30557},{"sent":"girl on right","ref_id":30558},{"sent":"girl in white","ref_id":30559},{"sent":"girl in middle","ref_id":30560},{"sent":"girl on left","ref_id":30561},{"sent":"right girl","ref_id":30562},{"sent":"second from left","ref_id":30563},{"sent":"man on right","ref_id":30637},{"sent":"right laptop","ref_id":30638},{"sent":"bottom left laptop","ref_id":30639},{"sent":"woman on left","ref_id":30640},{"sent":"man","ref_id":30676},{"sent":"man on right","ref_id":30677},{"sent":"left person","ref_id":30731},{"sent":"person in black","ref_id":30732},{"sent":"left guy","ref_id":30769},{"sent":"right guy","ref_id":30770},{"sent":"arm on left","ref_id":30803},{"sent":"girl in red","ref_id":30804},{"sent":"man on left","ref_id":30805},{"sent":"hand holding scissors","ref_id":30888},{"sent":"hand","ref_id":30889},{"sent":"left girl","ref_id":30959},{"sent":"baby","ref_id":30960},{"sent":"left bench","ref_id":31198},{"sent":"right bike","ref_id":31199},{"sent":"man in white shirt","ref_id":31201},{"sent":"person on right","ref_id":31202},{"sent":"woman","ref_id":31203},{"sent":"woman on right","ref_id":31206},{"sent":"woman","ref_id":31207},{"sent":"batter","ref_id":31356},{"sent":"catcher","ref_id":31357},{"sent":"batter","ref_id":31358},{"sent":"left edge of pic","ref_id":31459},{"sent":"arm on right","ref_id":31460},{"sent":"man on right","ref_id":31461},{"sent":"man","ref_id":31462},{"sent":"catcher","ref_id":31505},{"sent":"batter","ref_id":31506},{"sent":"girl on right","ref_id":31547},{"sent":"woman","ref_id":31548},{"sent":"chair on left","ref_id":31552},{"sent":"woman","ref_id":31553},{"sent":"girl","ref_id":31554},{"sent":"girl","ref_id":31555},{"sent":"left guy","ref_id":31572},{"sent":"right girl","ref_id":31573},{"sent":"right woman","ref_id":31597},{"sent":"man on right","ref_id":31598},{"sent":"woman in middle","ref_id":31599},{"sent":"woman in middle","ref_id":31600},{"sent":"man on left","ref_id":31601},{"sent":"player in white","ref_id":31762},{"sent":"red shirt right","ref_id":31763},{"sent":"blue shirt","ref_id":31764},{"sent":"guy in white shirt","ref_id":31765},{"sent":"person in black","ref_id":31799},{"sent":"person on right","ref_id":31800},{"sent":"man on right","ref_id":31816},{"sent":"man in suit","ref_id":31817},{"sent":"man on left","ref_id":31818},{"sent":"person on left","ref_id":31862},{"sent":"girl","ref_id":31863},{"sent":"person in black on right","ref_id":31866},{"sent":"yellow shirt","ref_id":31867},{"sent":"man","ref_id":31955},{"sent":"girl","ref_id":31956},{"sent":"left girl","ref_id":32164},{"sent":"boy on right","ref_id":32217},{"sent":"boy in blue","ref_id":32218},{"sent":"man on 
right","ref_id":32234},{"sent":"man","ref_id":32235},{"sent":"baby","ref_id":32298},{"sent":"man on left","ref_id":32299},{"sent":"black suitcase","ref_id":32432},{"sent":"black bag","ref_id":32433},{"sent":"catcher","ref_id":32508},{"sent":"batter","ref_id":32509},{"sent":"woman on left","ref_id":32582},{"sent":"girl in white","ref_id":32583},{"sent":"batter","ref_id":32584},{"sent":"umpire","ref_id":32585},{"sent":"woman","ref_id":32644},{"sent":"girl","ref_id":32645},{"sent":"right sheep","ref_id":32646},{"sent":"left sheep","ref_id":32647},{"sent":"right guy","ref_id":32804},{"sent":"man in middle","ref_id":32805},{"sent":"man on left","ref_id":32806},{"sent":"man in middle","ref_id":32807},{"sent":"man in white","ref_id":32850},{"sent":"guy in white shirt","ref_id":32851},{"sent":"man in white shirt","ref_id":33044},{"sent":"man in blue shirt","ref_id":33045},{"sent":"woman in white","ref_id":33046},{"sent":"man on right","ref_id":33056},{"sent":"man in middle","ref_id":33057},{"sent":"man","ref_id":33058},{"sent":"man","ref_id":33097},{"sent":"man on right","ref_id":33098},{"sent":"bottom right corner","ref_id":33327},{"sent":"man in black shirt","ref_id":33328},{"sent":"catcher","ref_id":33462},{"sent":"batter","ref_id":33463},{"sent":"man on right","ref_id":33599},{"sent":"man","ref_id":33600},{"sent":"right person","ref_id":33622},{"sent":"man on right","ref_id":33623},{"sent":"girl in middle","ref_id":33624},{"sent":"girl on left","ref_id":33625},{"sent":"guy on right","ref_id":33631},{"sent":"left person","ref_id":33632},{"sent":"catcher","ref_id":33633},{"sent":"batter","ref_id":33634},{"sent":"donut in middle","ref_id":33696},{"sent":"hand","ref_id":33697},{"sent":"white shirt","ref_id":33698},{"sent":"man in black shirt","ref_id":33819},{"sent":"woman on right","ref_id":33820},{"sent":"woman in black","ref_id":33821},{"sent":"man in black","ref_id":33822},{"sent":"blue suitcase","ref_id":33922},{"sent":"man in black shirt","ref_id":33923},{"sent":"the seat behind the man","ref_id":33924},{"sent":"woman in black","ref_id":33925},{"sent":"woman in front","ref_id":33926},{"sent":"girl","ref_id":33990},{"sent":"right flower","ref_id":33991},{"sent":"right bottom corner","ref_id":34093},{"sent":"woman in pink","ref_id":34094},{"sent":"woman in black","ref_id":34095},{"sent":"man on left","ref_id":34221},{"sent":"man in white","ref_id":34222},{"sent":"woman in white","ref_id":34389},{"sent":"man in white shirt","ref_id":34390},{"sent":"left girl","ref_id":34391},{"sent":"girl in middle","ref_id":34443},{"sent":"girl in white","ref_id":34444},{"sent":"woman on right","ref_id":34445},{"sent":"woman on left","ref_id":34463},{"sent":"white car","ref_id":34464},{"sent":"man in white","ref_id":34478},{"sent":"man on left","ref_id":34479},{"sent":"man in blue","ref_id":34480},{"sent":"man on right","ref_id":34481},{"sent":"right guy","ref_id":34655},{"sent":"woman","ref_id":34656},{"sent":"person on left","ref_id":34659},{"sent":"girl in front","ref_id":34660},{"sent":"girl on right","ref_id":34661},{"sent":"person in white shirt","ref_id":34708},{"sent":"person in white shirt","ref_id":34709},{"sent":"person in white shirt","ref_id":34710},{"sent":"person in white shirt","ref_id":34711},{"sent":"woman in pink","ref_id":34712},{"sent":"woman in front","ref_id":34713},{"sent":"second from right","ref_id":34716},{"sent":"man on right","ref_id":34717},{"sent":"right person","ref_id":34718},{"sent":"left person","ref_id":34719},{"sent":"right 
woman","ref_id":34743},{"sent":"woman","ref_id":34744},{"sent":"man on right","ref_id":34745},{"sent":"batter","ref_id":34879},{"sent":"catcher","ref_id":34880},{"sent":"left kid","ref_id":35066},{"sent":"girl on right","ref_id":35067},{"sent":"kid on right","ref_id":35100},{"sent":"kid","ref_id":35101},{"sent":"blue","ref_id":35172},{"sent":"man in front with white shirt","ref_id":35206},{"sent":"second from left","ref_id":35207},{"sent":"green shirt","ref_id":35208},{"sent":"woman in green","ref_id":35209},{"sent":"girl","ref_id":35268},{"sent":"woman","ref_id":35269},{"sent":"bride","ref_id":35305},{"sent":"groom","ref_id":35306},{"sent":"man","ref_id":35319},{"sent":"woman in red","ref_id":35320},{"sent":"man in white","ref_id":35331},{"sent":"man in white shirt","ref_id":35332},{"sent":"guy on right","ref_id":35333},{"sent":"left person","ref_id":35407},{"sent":"woman on right","ref_id":35408},{"sent":"woman","ref_id":35409},{"sent":"umbrella on right","ref_id":35410},{"sent":"top left corner","ref_id":35411},{"sent":"girl in blue","ref_id":35615},{"sent":"woman in white","ref_id":35616},{"sent":"left dog","ref_id":35654},{"sent":"bottom left corner","ref_id":35655},{"sent":"left person","ref_id":35656},{"sent":"right dog","ref_id":35657},{"sent":"man in white shirt","ref_id":35710},{"sent":"woman in white","ref_id":35711},{"sent":"girl","ref_id":35739},{"sent":"guy on left","ref_id":35740},{"sent":"girl","ref_id":35786},{"sent":"girl","ref_id":35787},{"sent":"left guy","ref_id":35794},{"sent":"right girl","ref_id":35795},{"sent":"woman in black","ref_id":35837},{"sent":"left guy","ref_id":35848},{"sent":"right person","ref_id":35849},{"sent":"boy in middle","ref_id":35850},{"sent":"right girl","ref_id":35851},{"sent":"woman in red","ref_id":35965},{"sent":"woman in red","ref_id":35966},{"sent":"man in front","ref_id":35970},{"sent":"player in white","ref_id":35975},{"sent":"guy in blue","ref_id":35976},{"sent":"woman","ref_id":36051},{"sent":"baby","ref_id":36052},{"sent":"batter","ref_id":36160},{"sent":"catcher","ref_id":36161},{"sent":"umpire","ref_id":36162},{"sent":"kid","ref_id":36426},{"sent":"baby","ref_id":36427},{"sent":"woman","ref_id":36545},{"sent":"woman","ref_id":36546},{"sent":"woman on left","ref_id":36691},{"sent":"man on right","ref_id":36692},{"sent":"woman in black","ref_id":36693},{"sent":"man in blue shirt","ref_id":36694},{"sent":"woman on right","ref_id":36779},{"sent":"man","ref_id":36780},{"sent":"woman in back","ref_id":36880},{"sent":"woman in white","ref_id":36881},{"sent":"man on right","ref_id":36911},{"sent":"left person","ref_id":36912},{"sent":"man on left","ref_id":36913},{"sent":"left person","ref_id":36928},{"sent":"girl","ref_id":36929},{"sent":"girl","ref_id":36930},{"sent":"girl","ref_id":36931},{"sent":"batter","ref_id":36993},{"sent":"batter","ref_id":36994},{"sent":"right person","ref_id":36999},{"sent":"left guy","ref_id":37000},{"sent":"umpire","ref_id":37032},{"sent":"batter","ref_id":37033},{"sent":"woman","ref_id":37066},{"sent":"man","ref_id":37067},{"sent":"man on right","ref_id":37125},{"sent":"man","ref_id":37126},{"sent":"red shirt","ref_id":37250},{"sent":"girl","ref_id":37251},{"sent":"girl in blue","ref_id":37286},{"sent":"right guy","ref_id":37287},{"sent":"man on left","ref_id":37288},{"sent":"person in background","ref_id":37431},{"sent":"man","ref_id":37432},{"sent":"left hand","ref_id":37472},{"sent":"hand on right","ref_id":37473},{"sent":"man on right","ref_id":37478},{"sent":"woman","ref_id":37479},{"sent":"man in black 
shirt","ref_id":37598},{"sent":"man in white shirt","ref_id":37599},{"sent":"man in black shirt","ref_id":37600},{"sent":"kid","ref_id":37756},{"sent":"girl","ref_id":37757},{"sent":"baby","ref_id":37800},{"sent":"woman","ref_id":37801},{"sent":"woman on left","ref_id":37815},{"sent":"man in white shirt","ref_id":37816},{"sent":"man in white shirt","ref_id":37883},{"sent":"white shirt","ref_id":37884},{"sent":"man on right","ref_id":37974},{"sent":"woman","ref_id":37975},{"sent":"left girl","ref_id":38214},{"sent":"man on right","ref_id":38215},{"sent":"right UNK","ref_id":38216},{"sent":"man in black shirt","ref_id":38227},{"sent":"woman in white","ref_id":38228},{"sent":"left guy","ref_id":38274},{"sent":"man in white","ref_id":38275},{"sent":"person on right","ref_id":38276},{"sent":"man in white shirt","ref_id":38340},{"sent":"man in white shirt","ref_id":38341},{"sent":"man on right","ref_id":38390},{"sent":"left woman","ref_id":38391},{"sent":"right guy","ref_id":38392},{"sent":"right guy","ref_id":38417},{"sent":"guy in white","ref_id":38418},{"sent":"guy in white","ref_id":38419},{"sent":"man on right","ref_id":38446},{"sent":"second from right","ref_id":38447},{"sent":"man in middle","ref_id":38448},{"sent":"second from left","ref_id":38449},{"sent":"man on left","ref_id":38504},{"sent":"man on right","ref_id":38505},{"sent":"woman","ref_id":38506},{"sent":"right girl","ref_id":38544},{"sent":"woman on left","ref_id":38545},{"sent":"chair on left","ref_id":38546},{"sent":"chair on right","ref_id":38547},{"sent":"right person","ref_id":38587},{"sent":"man in middle","ref_id":38588},{"sent":"person in middle","ref_id":38589},{"sent":"man in white shirt","ref_id":38650},{"sent":"woman in black","ref_id":38651},{"sent":"left guy","ref_id":38654},{"sent":"man in red shirt","ref_id":38655},{"sent":"woman in black","ref_id":38656},{"sent":"red shirt","ref_id":38742},{"sent":"man in white shirt","ref_id":38743},{"sent":"woman in black","ref_id":38744},{"sent":"man","ref_id":38815},{"sent":"man","ref_id":38816},{"sent":"person on bike","ref_id":38856},{"sent":"person in red","ref_id":38857},{"sent":"left person","ref_id":38938},{"sent":"woman in front","ref_id":38939},{"sent":"white couch","ref_id":39139},{"sent":"woman in red","ref_id":39140},{"sent":"man in white","ref_id":39141},{"sent":"man in white shirt","ref_id":39142},{"sent":"man on right","ref_id":39180},{"sent":"man on right","ref_id":39181},{"sent":"man in white","ref_id":39182},{"sent":"man in white","ref_id":39183},{"sent":"man on right","ref_id":39298},{"sent":"man on left","ref_id":39299},{"sent":"left person","ref_id":39406},{"sent":"man","ref_id":39407},{"sent":"black shirt","ref_id":39550},{"sent":"laptop on right","ref_id":39551},{"sent":"black laptop","ref_id":39552},{"sent":"blue shirt","ref_id":39553},{"sent":"left laptop","ref_id":39554},{"sent":"left hand","ref_id":39555},{"sent":"guy in white","ref_id":39593},{"sent":"guy in red shirt","ref_id":39594},{"sent":"guy in white shirt","ref_id":39595},{"sent":"white shirt","ref_id":39596},{"sent":"kid in red","ref_id":39597},{"sent":"guy in middle","ref_id":39598},{"sent":"kid in blue","ref_id":39599},{"sent":"girl on right","ref_id":39600},{"sent":"man on left","ref_id":39635},{"sent":"man","ref_id":39636},{"sent":"person on right","ref_id":39644},{"sent":"woman","ref_id":39645},{"sent":"white car on right","ref_id":39646},{"sent":"baby","ref_id":39755},{"sent":"baby","ref_id":39756},{"sent":"man in black","ref_id":39839},{"sent":"kid in 
white","ref_id":39875},{"sent":"woman","ref_id":39876},{"sent":"baby","ref_id":39877},{"sent":"right person","ref_id":39909},{"sent":"person on right","ref_id":39910},{"sent":"woman in front","ref_id":39929},{"sent":"woman on right","ref_id":39930},{"sent":"woman in pink","ref_id":39931},{"sent":"girl in back","ref_id":40011},{"sent":"boy","ref_id":40012},{"sent":"person in black shirt on right","ref_id":40013},{"sent":"player on right","ref_id":40122},{"sent":"catcher","ref_id":40123},{"sent":"man in white","ref_id":40174},{"sent":"woman on left","ref_id":40175},{"sent":"woman in black","ref_id":40313},{"sent":"person on right","ref_id":40314},{"sent":"man on right","ref_id":40315},{"sent":"woman on left","ref_id":40316},{"sent":"woman in black","ref_id":40348},{"sent":"woman on left","ref_id":40349},{"sent":"man in white shirt","ref_id":40374},{"sent":"white hat","ref_id":40375},{"sent":"woman on left","ref_id":40376},{"sent":"player in white","ref_id":40390},{"sent":"man in white","ref_id":40391},{"sent":"right hot dog","ref_id":40496},{"sent":"woman in back","ref_id":40497},{"sent":"right pizza","ref_id":40498},{"sent":"girl in pink","ref_id":40499},{"sent":"man in white shirt","ref_id":40567},{"sent":"left person","ref_id":40568},{"sent":"girl on left","ref_id":40601},{"sent":"person in front","ref_id":40602},{"sent":"woman on right","ref_id":40603},{"sent":"umpire","ref_id":40633},{"sent":"batter","ref_id":40634},{"sent":"batter","ref_id":40635},{"sent":"car on right","ref_id":40695},{"sent":"right guy","ref_id":40696},{"sent":"kid in blue","ref_id":40725},{"sent":"catcher","ref_id":40726},{"sent":"man in middle","ref_id":40808},{"sent":"woman in black","ref_id":40809},{"sent":"woman on left","ref_id":40856},{"sent":"woman on right","ref_id":40857},{"sent":"woman","ref_id":40864},{"sent":"woman in white","ref_id":40865},{"sent":"woman","ref_id":40866},{"sent":"black area above the UNK","ref_id":40878},{"sent":"the man in the middle","ref_id":40879},{"sent":"man on right","ref_id":40880},{"sent":"white shirt","ref_id":40881},{"sent":"man in white","ref_id":41067},{"sent":"man in front","ref_id":41068},{"sent":"girl","ref_id":41212},{"sent":"banana","ref_id":41213},{"sent":"woman","ref_id":41214},{"sent":"woman","ref_id":41215},{"sent":"man on left","ref_id":41296},{"sent":"man","ref_id":41297},{"sent":"man","ref_id":41457},{"sent":"woman","ref_id":41458},{"sent":"woman","ref_id":41478},{"sent":"girl","ref_id":41479},{"sent":"man in red hat","ref_id":41679},{"sent":"man in white shirt","ref_id":41680},{"sent":"woman in blue shirt","ref_id":41681},{"sent":"left elephant","ref_id":41705},{"sent":"elephant on right","ref_id":41706},{"sent":"elephant in back","ref_id":41707},{"sent":"baby","ref_id":41708},{"sent":"baby","ref_id":41709},{"sent":"right person","ref_id":41812},{"sent":"woman in white","ref_id":41889},{"sent":"woman in white","ref_id":41890},{"sent":"person on left","ref_id":41899},{"sent":"man in white","ref_id":41900},{"sent":"woman on right","ref_id":42074},{"sent":"bottle on left","ref_id":42075},{"sent":"right bottle","ref_id":42076},{"sent":"bottle on left","ref_id":42077},{"sent":"man in blue","ref_id":42078},{"sent":"man on left","ref_id":42079},{"sent":"kid in red","ref_id":42080},{"sent":"person in white","ref_id":42189},{"sent":"kid","ref_id":42190},{"sent":"man in white","ref_id":42208},{"sent":"man in white shirt","ref_id":42209},{"sent":"guy in white shirt","ref_id":42257},{"sent":"guy in white shirt","ref_id":42258},{"sent":"guy on 
right","ref_id":42369},{"sent":"man in black shirt","ref_id":42370},{"sent":"woman","ref_id":42635},{"sent":"man","ref_id":42636},{"sent":"right guy","ref_id":42678},{"sent":"woman","ref_id":42679},{"sent":"right woman","ref_id":42896},{"sent":"person in black under umbrella","ref_id":42897},{"sent":"man on right","ref_id":43003},{"sent":"bottom left table","ref_id":43004},{"sent":"man in white","ref_id":43005},{"sent":"girl","ref_id":43088},{"sent":"woman","ref_id":43089},{"sent":"left guy","ref_id":43150},{"sent":"left person","ref_id":43151},{"sent":"right guy","ref_id":43152},{"sent":"person on right","ref_id":43153},{"sent":"right guy","ref_id":43162},{"sent":"bike on right","ref_id":43175},{"sent":"right blue","ref_id":43176},{"sent":"man on bike","ref_id":43177},{"sent":"front guy","ref_id":43178},{"sent":"bike on left","ref_id":43179},{"sent":"second bike from left","ref_id":43180},{"sent":"bike on right","ref_id":43181},{"sent":"baby","ref_id":43261},{"sent":"left person","ref_id":43262},{"sent":"baby","ref_id":43263},{"sent":"baby","ref_id":43264},{"sent":"table in front","ref_id":43288},{"sent":"woman in middle","ref_id":43289},{"sent":"girl on right","ref_id":43290},{"sent":"woman on left","ref_id":43291},{"sent":"middle chair","ref_id":43292},{"sent":"girl on right","ref_id":43298},{"sent":"woman","ref_id":43299},{"sent":"man in black","ref_id":43300},{"sent":"person in black","ref_id":43311},{"sent":"person in black","ref_id":43312},{"sent":"left girl","ref_id":43313},{"sent":"right girl","ref_id":43314},{"sent":"top left corner","ref_id":43315},{"sent":"person on left","ref_id":43316},{"sent":"right glass","ref_id":43317},{"sent":"glass on right","ref_id":43318},{"sent":"person in back","ref_id":43319},{"sent":"right glass","ref_id":43320},{"sent":"blue car","ref_id":43341},{"sent":"red shirt","ref_id":43342},{"sent":"person on right","ref_id":43343},{"sent":"white car","ref_id":43344},{"sent":"woman","ref_id":43379},{"sent":"man","ref_id":43380},{"sent":"man in white","ref_id":43535},{"sent":"person on left","ref_id":43536},{"sent":"woman in black","ref_id":43537},{"sent":"person on left","ref_id":43538},{"sent":"man in front","ref_id":43539},{"sent":"left bear","ref_id":43647},{"sent":"woman","ref_id":43648},{"sent":"man in blue","ref_id":43944},{"sent":"man in middle","ref_id":43945},{"sent":"person on right","ref_id":43948},{"sent":"woman","ref_id":43949},{"sent":"red shirt","ref_id":43996},{"sent":"guy in blue","ref_id":44027},{"sent":"tennis player","ref_id":44028},{"sent":"right guy","ref_id":44050},{"sent":"man in black","ref_id":44051},{"sent":"kid in red","ref_id":44052},{"sent":"kid in red","ref_id":44357},{"sent":"man in white shirt","ref_id":44358},{"sent":"person in background","ref_id":44417},{"sent":"woman","ref_id":44418},{"sent":"woman","ref_id":44448},{"sent":"man","ref_id":44449},{"sent":"man on left","ref_id":44517},{"sent":"girl","ref_id":44518},{"sent":"woman","ref_id":44519},{"sent":"woman in purple","ref_id":44553},{"sent":"right person","ref_id":44554},{"sent":"red shirt","ref_id":44582},{"sent":"girl in blue shirt","ref_id":44583},{"sent":"guy in black shirt","ref_id":44584},{"sent":"woman in white","ref_id":44628},{"sent":"man on left","ref_id":44629},{"sent":"woman in white","ref_id":44630},{"sent":"girl in middle","ref_id":44633},{"sent":"man on left","ref_id":44634},{"sent":"woman in back","ref_id":44635},{"sent":"left person","ref_id":44642},{"sent":"woman","ref_id":44643},{"sent":"guy in red","ref_id":44644},{"sent":"person on 
left","ref_id":44645},{"sent":"guy in white","ref_id":44699},{"sent":"right hand","ref_id":44700},{"sent":"man on right","ref_id":44714},{"sent":"woman on left","ref_id":44715},{"sent":"man in black","ref_id":44732},{"sent":"man in red","ref_id":44733},{"sent":"the woman","ref_id":44740},{"sent":"hand","ref_id":44741},{"sent":"man","ref_id":44766},{"sent":"batter","ref_id":45019},{"sent":"catcher","ref_id":45020},{"sent":"red shirt","ref_id":45021},{"sent":"girl in white shirt","ref_id":45046},{"sent":"person on right","ref_id":45047},{"sent":"woman in white","ref_id":45048},{"sent":"left girl","ref_id":45049},{"sent":"guy on left","ref_id":45050},{"sent":"girl in white","ref_id":45051},{"sent":"girl in white","ref_id":45052},{"sent":"catcher","ref_id":45300},{"sent":"batter","ref_id":45301},{"sent":"guy on right","ref_id":45340},{"sent":"woman on left","ref_id":45341},{"sent":"woman in black dress","ref_id":45342},{"sent":"person on right","ref_id":45358},{"sent":"person in middle","ref_id":45359},{"sent":"man on right","ref_id":45367},{"sent":"woman in black","ref_id":45368},{"sent":"woman on right","ref_id":45369},{"sent":"person on left","ref_id":45407},{"sent":"woman in white","ref_id":45434},{"sent":"man in red","ref_id":45435},{"sent":"woman in black","ref_id":45436},{"sent":"guy on bike","ref_id":45600},{"sent":"man","ref_id":45601},{"sent":"man on left","ref_id":45675},{"sent":"man on left","ref_id":45676},{"sent":"man in black","ref_id":45677},{"sent":"woman on left","ref_id":45837},{"sent":"man on left","ref_id":45838},{"sent":"girl on right","ref_id":45839},{"sent":"white shirt","ref_id":45840},{"sent":"woman in red","ref_id":45841},{"sent":"right guy","ref_id":45863},{"sent":"left guy","ref_id":45864},{"sent":"guy in black","ref_id":45865},{"sent":"woman","ref_id":45966},{"sent":"woman","ref_id":45967},{"sent":"catcher","ref_id":46080},{"sent":"batter","ref_id":46081},{"sent":"umpire","ref_id":46165},{"sent":"catcher","ref_id":46166},{"sent":"woman in front","ref_id":46208},{"sent":"woman in front","ref_id":46209},{"sent":"person under umbrella","ref_id":46210},{"sent":"person on left","ref_id":46211},{"sent":"man in white","ref_id":46285},{"sent":"man in red hat","ref_id":46286},{"sent":"man in blue shirt","ref_id":46321},{"sent":"man on right","ref_id":46322},{"sent":"girl in white shirt on right","ref_id":46350},{"sent":"woman in white on right","ref_id":46351},{"sent":"woman in front with hat","ref_id":46352},{"sent":"woman in red","ref_id":46353},{"sent":"woman","ref_id":46393},{"sent":"right arm","ref_id":46394},{"sent":"top right corner","ref_id":46403},{"sent":"person in back","ref_id":46404},{"sent":"kid","ref_id":46405},{"sent":"person in background","ref_id":46451},{"sent":"person on right","ref_id":46452},{"sent":"woman","ref_id":46453},{"sent":"girl on right","ref_id":46555},{"sent":"arm on left","ref_id":46556},{"sent":"man in black shirt","ref_id":46581},{"sent":"man","ref_id":46582},{"sent":"person on left","ref_id":46672},{"sent":"man in black","ref_id":46673},{"sent":"man in middle with glasses","ref_id":46678},{"sent":"woman in white","ref_id":46679},{"sent":"bottom right corner","ref_id":46680},{"sent":"woman on right with black hair","ref_id":46681},{"sent":"woman in front with black hair","ref_id":46682},{"sent":"man in front with glasses","ref_id":46683},{"sent":"man on left","ref_id":46823},{"sent":"woman","ref_id":46824},{"sent":"girl in blue shirt","ref_id":46834},{"sent":"girl in black","ref_id":46835},{"sent":"person on 
right","ref_id":46836},{"sent":"girl in white shirt","ref_id":46837},{"sent":"groom","ref_id":46838},{"sent":"bride","ref_id":46839},{"sent":"right guy","ref_id":46880},{"sent":"man in white","ref_id":46881},{"sent":"woman on right","ref_id":46938},{"sent":"man on left","ref_id":46939},{"sent":"bottom right bowl","ref_id":46940},{"sent":"glass on left","ref_id":46941},{"sent":"left kid","ref_id":46949},{"sent":"man on right","ref_id":46950},{"sent":"woman in black","ref_id":46951},{"sent":"right person","ref_id":47014},{"sent":"left person","ref_id":47015},{"sent":"man on right","ref_id":47077},{"sent":"man on right","ref_id":47092},{"sent":"man in white shirt","ref_id":47093},{"sent":"man in black","ref_id":47094},{"sent":"guy in white shirt","ref_id":47164},{"sent":"man in white","ref_id":47165},{"sent":"woman in black","ref_id":47319},{"sent":"man in white","ref_id":47391},{"sent":"bottom right corner","ref_id":47392},{"sent":"bottom left person","ref_id":47393},{"sent":"man in black","ref_id":47394},{"sent":"man in middle","ref_id":47441},{"sent":"man in black","ref_id":47442},{"sent":"right guy","ref_id":47443},{"sent":"right guy","ref_id":47446},{"sent":"guy in black","ref_id":47447},{"sent":"person in front","ref_id":47519},{"sent":"woman in front","ref_id":47520},{"sent":"person on right","ref_id":47521},{"sent":"person on left","ref_id":47522},{"sent":"white car","ref_id":47566},{"sent":"woman","ref_id":47567},{"sent":"white shirt","ref_id":47568},{"sent":"right bus","ref_id":47569},{"sent":"woman in black","ref_id":47682},{"sent":"man in suit on right","ref_id":47683},{"sent":"woman in black","ref_id":47684},{"sent":"white shirt left","ref_id":47685},{"sent":"woman in black on left","ref_id":47686},{"sent":"man in black suit","ref_id":47687},{"sent":"man on right","ref_id":47757},{"sent":"woman","ref_id":47758},{"sent":"person on right","ref_id":47777},{"sent":"person on left","ref_id":47778},{"sent":"woman in white","ref_id":47779},{"sent":"woman in red","ref_id":47780},{"sent":"right player","ref_id":47885},{"sent":"left girl","ref_id":47886},{"sent":"right girl","ref_id":47887},{"sent":"left player","ref_id":47888},{"sent":"person in middle","ref_id":47999},{"sent":"person in front","ref_id":48000},{"sent":"right person","ref_id":48001},{"sent":"person in background on left","ref_id":48006},{"sent":"batter","ref_id":48007},{"sent":"man in black","ref_id":48132},{"sent":"woman in blue","ref_id":48133},{"sent":"pizza on right","ref_id":48134},{"sent":"pizza","ref_id":48135},{"sent":"girl","ref_id":48147},{"sent":"girl on right","ref_id":48148},{"sent":"catcher","ref_id":48151},{"sent":"umpire","ref_id":48152},{"sent":"batter","ref_id":48153},{"sent":"man in black shirt","ref_id":48201},{"sent":"man","ref_id":48202},{"sent":"person in front","ref_id":48304},{"sent":"person in middle","ref_id":48305},{"sent":"player in white","ref_id":48322},{"sent":"player","ref_id":48323},{"sent":"man","ref_id":48363},{"sent":"man in black","ref_id":48364},{"sent":"woman","ref_id":48365},{"sent":"man on left","ref_id":48374},{"sent":"right elephant","ref_id":48375},{"sent":"right guy","ref_id":48473},{"sent":"man in middle","ref_id":48474},{"sent":"black umbrella","ref_id":48475},{"sent":"right horse","ref_id":48476},{"sent":"man on left","ref_id":48477},{"sent":"man on left","ref_id":48478},{"sent":"batter","ref_id":48571},{"sent":"catcher","ref_id":48572},{"sent":"girl in blue","ref_id":48610},{"sent":"girl in white","ref_id":48611},{"sent":"man in white shirt","ref_id":48705},{"sent":"woman 
on right","ref_id":48706},{"sent":"woman in black","ref_id":48707},{"sent":"left player","ref_id":48745},{"sent":"right player","ref_id":48746},{"sent":"man","ref_id":48790},{"sent":"man on right","ref_id":48791},{"sent":"woman","ref_id":48983},{"sent":"man on right","ref_id":48984},{"sent":"man in black","ref_id":49014},{"sent":"guy in background","ref_id":49015},{"sent":"batter","ref_id":49016},{"sent":"player in black","ref_id":49199},{"sent":"player in red","ref_id":49200},{"sent":"player in red","ref_id":49201},{"sent":"man","ref_id":49312},{"sent":"man","ref_id":49313},{"sent":"person on left","ref_id":49315},{"sent":"person in middle","ref_id":49316},{"sent":"head on left","ref_id":49373},{"sent":"person on right","ref_id":49374},{"sent":"man in white","ref_id":49375},{"sent":"woman","ref_id":49457},{"sent":"man","ref_id":49458},{"sent":"woman","ref_id":49538},{"sent":"woman","ref_id":49539},{"sent":"right leg","ref_id":49600},{"sent":"left leg","ref_id":49601},{"sent":"person on right","ref_id":49606},{"sent":"batter","ref_id":49607},{"sent":"person on right","ref_id":49620},{"sent":"person on left","ref_id":49621},{"sent":"man","ref_id":49622},{"sent":"man in black shirt","ref_id":49638},{"sent":"man in blue","ref_id":49639},{"sent":"woman on right","ref_id":49747},{"sent":"woman","ref_id":49748},{"sent":"woman in white","ref_id":49833},{"sent":"woman in black","ref_id":49834},{"sent":"guy in black shirt","ref_id":49932},{"sent":"white shirt","ref_id":49933},{"sent":"man in black","ref_id":47},{"sent":"person on right","ref_id":109},{"sent":"woman in red","ref_id":110},{"sent":"car behind bike","ref_id":111},{"sent":"car on left","ref_id":112},{"sent":"man in blue","ref_id":382},{"sent":"man in white","ref_id":383},{"sent":"left person","ref_id":519},{"sent":"man on right","ref_id":520}]}
================================================
FILE: refer/test/sample_expressions_testB.json
================================================
{"predictions":[{"sent":"car on left","ref_id":25},{"sent":"car on left","ref_id":26},{"sent":"top sandwich","ref_id":27},{"sent":"top left donut","ref_id":28},{"sent":"zebra on left","ref_id":45},{"sent":"right zebra","ref_id":46},{"sent":"chair in front of man","ref_id":164},{"sent":"bottom right corner","ref_id":165},{"sent":"left chair","ref_id":166},{"sent":"top right corner","ref_id":232},{"sent":"pizza in front","ref_id":233},{"sent":"glass in back","ref_id":234},{"sent":"left glass","ref_id":235},{"sent":"yellow fruit on left","ref_id":259},{"sent":"apple in front","ref_id":260},{"sent":"yellow apple","ref_id":261},{"sent":"orange in the middle","ref_id":262},{"sent":"bottom right orange","ref_id":285},{"sent":"bottom left apple","ref_id":286},{"sent":"bottom right green apple","ref_id":287},{"sent":"second row from bottom right","ref_id":288},{"sent":"white bear","ref_id":299},{"sent":"brown bear","ref_id":300},{"sent":"red vase","ref_id":326},{"sent":"red vase","ref_id":327},{"sent":"vase","ref_id":328},{"sent":"glass on left","ref_id":360},{"sent":"glass of beer","ref_id":361},{"sent":"bottle on right","ref_id":362},{"sent":"bottle on left","ref_id":363},{"sent":"bottle of wine bottle on left","ref_id":364},{"sent":"right horse","ref_id":435},{"sent":"left horse","ref_id":436},{"sent":"train on right","ref_id":474},{"sent":"train on left","ref_id":475},{"sent":"right elephant","ref_id":545},{"sent":"boat on right","ref_id":605},{"sent":"white car","ref_id":629},{"sent":"white car","ref_id":630},{"sent":"left bed","ref_id":668},{"sent":"bed","ref_id":669},{"sent":"right bike","ref_id":677},{"sent":"left bike","ref_id":678},{"sent":"left table","ref_id":721},{"sent":"table","ref_id":722},{"sent":"traffic light","ref_id":837},{"sent":"traffic light","ref_id":838},{"sent":"front bike","ref_id":855},{"sent":"front bike","ref_id":856},{"sent":"blue tie","ref_id":923},{"sent":"left tie","ref_id":924},{"sent":"right tie","ref_id":925},{"sent":"red tie","ref_id":926},{"sent":"monitor on right","ref_id":940},{"sent":"monitor on right","ref_id":941},{"sent":"pizza in front","ref_id":1155},{"sent":"pizza slice","ref_id":1156},{"sent":"bottom left bananas","ref_id":1218},{"sent":"top left bananas","ref_id":1219},{"sent":"pizza in front","ref_id":1227},{"sent":"pizza in front","ref_id":1228},{"sent":"baby elephant","ref_id":1256},{"sent":"big elephant","ref_id":1257},{"sent":"right orange","ref_id":1273},{"sent":"top orange","ref_id":1274},{"sent":"horse on left","ref_id":1283},{"sent":"left fridge","ref_id":1339},{"sent":"fridge in front of the fridge","ref_id":1340},{"sent":"right cow","ref_id":1368},{"sent":"white truck","ref_id":1644},{"sent":"white car","ref_id":1645},{"sent":"broccoli in front","ref_id":1776},{"sent":"broccoli on right","ref_id":1777},{"sent":"second row from right","ref_id":1865},{"sent":"top middle sandwich","ref_id":1866},{"sent":"right most sandwich","ref_id":1867},{"sent":"left sandwich","ref_id":1868},{"sent":"left most sandwich","ref_id":1869},{"sent":"middle row second from right","ref_id":1870},{"sent":"second from right","ref_id":1871},{"sent":"elephant on right","ref_id":2033},{"sent":"elephant","ref_id":2034},{"sent":"second from left","ref_id":2103},{"sent":"right most yellow","ref_id":2104},{"sent":"second row from right","ref_id":2105},{"sent":"top right donut","ref_id":2122},{"sent":"bottom left donut","ref_id":2123},{"sent":"bottom left donut","ref_id":2124},{"sent":"middle donut","ref_id":2125},{"sent":"right donut","ref_id":2126},{"sent":"top right 
dessert","ref_id":2370},{"sent":"middle dessert","ref_id":2371},{"sent":"bowl","ref_id":2392},{"sent":"left bowl","ref_id":2393},{"sent":"bus on right","ref_id":2467},{"sent":"bus in front","ref_id":2468},{"sent":"left monitor","ref_id":2540},{"sent":"right monitor","ref_id":2541},{"sent":"blurry food in back","ref_id":2576},{"sent":"bottom right corner","ref_id":2577},{"sent":"glass in front of the woman","ref_id":2578},{"sent":"glass on left","ref_id":2579},{"sent":"plant on right","ref_id":2642},{"sent":"top right corner","ref_id":2643},{"sent":"green plant","ref_id":2644},{"sent":"bike on right","ref_id":2692},{"sent":"bike on the right","ref_id":2693},{"sent":"bike on left","ref_id":2694},{"sent":"bottom right red","ref_id":2738},{"sent":"bottom left UNK","ref_id":2739},{"sent":"left cat","ref_id":2857},{"sent":"cat on right","ref_id":2858},{"sent":"left elephant","ref_id":2937},{"sent":"right elephant","ref_id":2938},{"sent":"elephant on right","ref_id":2939},{"sent":"left person","ref_id":2944},{"sent":"person on left","ref_id":2945},{"sent":"top left hot dog","ref_id":2946},{"sent":"sandwich on left","ref_id":2947},{"sent":"front sandwich","ref_id":2948},{"sent":"top right sandwich","ref_id":2949},{"sent":"top sandwich","ref_id":2950},{"sent":"right","ref_id":2960},{"sent":"white and white","ref_id":2961},{"sent":"right train","ref_id":2962},{"sent":"right train","ref_id":2963},{"sent":"plant in the middle","ref_id":3028},{"sent":"right plant","ref_id":3029},{"sent":"left umbrella","ref_id":3125},{"sent":"umbrella","ref_id":3126},{"sent":"second from left","ref_id":3224},{"sent":"right box","ref_id":3225},{"sent":"left UNK","ref_id":3226},{"sent":"left horse","ref_id":3303},{"sent":"horse in front","ref_id":3304},{"sent":"bird on right","ref_id":3403},{"sent":"bird on left","ref_id":3404},{"sent":"table","ref_id":3656},{"sent":"left UNK","ref_id":3657},{"sent":"chair on right","ref_id":3844},{"sent":"top right bunk","ref_id":3845},{"sent":"left bear","ref_id":3875},{"sent":"right bear","ref_id":3876},{"sent":"top suitcase","ref_id":3910},{"sent":"right suitcase","ref_id":3911},{"sent":"bottom right suitcase","ref_id":3912},{"sent":"couch on left","ref_id":3919},{"sent":"right couch","ref_id":3920},{"sent":"left wine bottle","ref_id":3931},{"sent":"right bottle","ref_id":3932},{"sent":"top layer","ref_id":3941},{"sent":"front","ref_id":3942},{"sent":"bear on left","ref_id":3950},{"sent":"bear on right","ref_id":3951},{"sent":"bear on right","ref_id":3952},{"sent":"big bear","ref_id":3954},{"sent":"bottom left bear","ref_id":3955},{"sent":"bottom left suitcase","ref_id":4004},{"sent":"top right corner","ref_id":4005},{"sent":"left sandwich","ref_id":4021},{"sent":"sandwich on left","ref_id":4022},{"sent":"right bottle","ref_id":4072},{"sent":"second from left","ref_id":4073},{"sent":"UNK bottle","ref_id":4074},{"sent":"UNK","ref_id":4075},{"sent":"zebra on left","ref_id":4329},{"sent":"zebra in front","ref_id":4330},{"sent":"zebra on left","ref_id":4500},{"sent":"right zebra","ref_id":4501},{"sent":"glass on left","ref_id":4724},{"sent":"bus on right","ref_id":4806},{"sent":"bottom black","ref_id":4822},{"sent":"black suitcase on right","ref_id":4823},{"sent":"left chair","ref_id":4911},{"sent":"boat on left","ref_id":4912},{"sent":"top right corner","ref_id":4915},{"sent":"top left donut","ref_id":4916},{"sent":"top middle donut","ref_id":4917},{"sent":"bottom right donut","ref_id":4918},{"sent":"middle donut","ref_id":4919},{"sent":"top left apple","ref_id":4925},{"sent":"orange on 
right","ref_id":4926},{"sent":"orange in middle","ref_id":4927},{"sent":"middle apple","ref_id":4928},{"sent":"bottom right carrot","ref_id":4981},{"sent":"orange carrot","ref_id":4982},{"sent":"bottom right corner","ref_id":4987},{"sent":"car bottom left","ref_id":4988},{"sent":"top left corner","ref_id":5000},{"sent":"the cat","ref_id":5001},{"sent":"bottom right sheep","ref_id":5037},{"sent":"left sheep","ref_id":5038},{"sent":"sheep in front","ref_id":5039},{"sent":"right giraffe","ref_id":5040},{"sent":"right giraffe","ref_id":5041},{"sent":"right remote","ref_id":5072},{"sent":"left remote","ref_id":5073},{"sent":"white bowl of food","ref_id":5074},{"sent":"hot dog","ref_id":5075},{"sent":"top right slice","ref_id":5116},{"sent":"left sandwich","ref_id":5117},{"sent":"bottom left food","ref_id":5178},{"sent":"pizza on right","ref_id":5179},{"sent":"bowl of food on left","ref_id":5180},{"sent":"left bowl","ref_id":5181},{"sent":"left monitor","ref_id":5242},{"sent":"right monitor","ref_id":5243},{"sent":"right cow","ref_id":5298},{"sent":"cow on left","ref_id":5299},{"sent":"left horse","ref_id":5327},{"sent":"horse in front","ref_id":5328},{"sent":"horse on right","ref_id":5329},{"sent":"elephant in middle","ref_id":5340},{"sent":"left elephant","ref_id":5341},{"sent":"white car","ref_id":5491},{"sent":"white car","ref_id":5492},{"sent":"yellow car","ref_id":5493},{"sent":"top middle brown bear","ref_id":5521},{"sent":"bear on right","ref_id":5522},{"sent":"right bear","ref_id":5523},{"sent":"top right bear","ref_id":5524},{"sent":"top right bear","ref_id":5525},{"sent":"bear on left","ref_id":5526},{"sent":"bear in middle","ref_id":5527},{"sent":"toilet on left","ref_id":5645},{"sent":"chair in front","ref_id":5646},{"sent":"right sheep","ref_id":5669},{"sent":"left sheep","ref_id":5670},{"sent":"left chair","ref_id":5694},{"sent":"right bed","ref_id":5695},{"sent":"right train","ref_id":5797},{"sent":"right train","ref_id":5798},{"sent":"right slice","ref_id":5809},{"sent":"top left donut","ref_id":5810},{"sent":"white car","ref_id":5829},{"sent":"right white bus","ref_id":5830},{"sent":"white truck","ref_id":5831},{"sent":"donut on right","ref_id":5967},{"sent":"bottom right donut","ref_id":5968},{"sent":"right donut","ref_id":5969},{"sent":"donut on the right","ref_id":5970},{"sent":"left umbrella","ref_id":6053},{"sent":"middle banana","ref_id":6054},{"sent":"left bear","ref_id":6055},{"sent":"left UNK","ref_id":6056},{"sent":"left sheep","ref_id":6258},{"sent":"sheep in middle","ref_id":6259},{"sent":"right sheep","ref_id":6260},{"sent":"middle bowl","ref_id":6278},{"sent":"top left food","ref_id":6279},{"sent":"top left tray","ref_id":6280},{"sent":"top right bread","ref_id":6281},{"sent":"bottom left","ref_id":6282},{"sent":"right bread","ref_id":6283},{"sent":"bottom left bread","ref_id":6284},{"sent":"bottom left bowl","ref_id":6312},{"sent":"green apple","ref_id":6313},{"sent":"right clock","ref_id":6843},{"sent":"left clock","ref_id":6844},{"sent":"left bed","ref_id":6927},{"sent":"right bed","ref_id":6928},{"sent":"bottom left corner","ref_id":6929},{"sent":"bed","ref_id":6974},{"sent":"bed on right","ref_id":6975},{"sent":"top right broccoli","ref_id":7143},{"sent":"broccoli in front","ref_id":7144},{"sent":"broccoli on right","ref_id":7159},{"sent":"broccoli on left","ref_id":7160},{"sent":"UNK","ref_id":7183},{"sent":"UNK","ref_id":7184},{"sent":"left cow","ref_id":7252},{"sent":"right cow","ref_id":7253},{"sent":"bottom right corner","ref_id":7268},{"sent":"front 
bike","ref_id":7269},{"sent":"left suitcase","ref_id":7316},{"sent":"right suitcase","ref_id":7317},{"sent":"black suitcase","ref_id":7318},{"sent":"second suitcase from left","ref_id":7319},{"sent":"second bus from right","ref_id":7636},{"sent":"right bus","ref_id":7637},{"sent":"right duck","ref_id":7645},{"sent":"left duck","ref_id":7646},{"sent":"left duck","ref_id":7647},{"sent":"umbrella on left","ref_id":7801},{"sent":"right umbrella","ref_id":7802},{"sent":"broccoli on left","ref_id":7812},{"sent":"broccoli on the right","ref_id":7813},{"sent":"top bear","ref_id":7838},{"sent":"left bear","ref_id":7839},{"sent":"right bear","ref_id":7840},{"sent":"right horse","ref_id":7895},{"sent":"left dog","ref_id":7896},{"sent":"right slice","ref_id":7916},{"sent":"pizza slice","ref_id":7917},{"sent":"elephant on left","ref_id":8085},{"sent":"bear on left","ref_id":8128},{"sent":"left couch","ref_id":8210},{"sent":"right couch","ref_id":8211},{"sent":"right monitor","ref_id":8352},{"sent":"top left monitor","ref_id":8353},{"sent":"top left banana","ref_id":8380},{"sent":"banana in middle","ref_id":8381},{"sent":"banana on left","ref_id":8382},{"sent":"bowl of soup","ref_id":8649},{"sent":"bowl of food on left","ref_id":8650},{"sent":"white car","ref_id":8681},{"sent":"top left corner","ref_id":8729},{"sent":"glass","ref_id":8781},{"sent":"drink","ref_id":8782},{"sent":"left cow","ref_id":8788},{"sent":"left sheep","ref_id":8789},{"sent":"right sheep","ref_id":8790},{"sent":"top left sheep","ref_id":8806},{"sent":"sheep","ref_id":8807},{"sent":"top left donut","ref_id":8855},{"sent":"top left donut","ref_id":8856},{"sent":"donut on left","ref_id":8857},{"sent":"top right donut","ref_id":8858},{"sent":"front bike","ref_id":8909},{"sent":"left bike","ref_id":8910},{"sent":"right duck","ref_id":9214},{"sent":"left duck","ref_id":9215},{"sent":"top duck","ref_id":9216},{"sent":"right bus","ref_id":9233},{"sent":"bus in front","ref_id":9234},{"sent":"donut in front","ref_id":9295},{"sent":"donut on right","ref_id":9296},{"sent":"bottom right donut","ref_id":9305},{"sent":"bottom donut","ref_id":9306},{"sent":"donut in front","ref_id":9307},{"sent":"donut in front","ref_id":9308},{"sent":"left umbrella","ref_id":9423},{"sent":"left umbrella","ref_id":9424},{"sent":"top right umbrella","ref_id":9425},{"sent":"right umbrella","ref_id":9426},{"sent":"left plane","ref_id":9446},{"sent":"plane","ref_id":9447},{"sent":"right giraffe","ref_id":9560},{"sent":"left giraffe","ref_id":9561},{"sent":"hand","ref_id":9574},{"sent":"hand","ref_id":9575},{"sent":"bottom left apple","ref_id":9576},{"sent":"right giraffe","ref_id":9598},{"sent":"giraffe on left","ref_id":9599},{"sent":"black suitcase","ref_id":9628},{"sent":"elephant in front","ref_id":9629},{"sent":"elephant in front","ref_id":9630},{"sent":"elephant on right","ref_id":9631},{"sent":"right couch","ref_id":9707},{"sent":"couch","ref_id":9708},{"sent":"right horse","ref_id":9836},{"sent":"horse on left","ref_id":9837},{"sent":"horse on left","ref_id":9919},{"sent":"white horse","ref_id":9920},{"sent":"horse in front","ref_id":9921},{"sent":"bear on left","ref_id":10035},{"sent":"bear on left","ref_id":10036},{"sent":"right sandwich","ref_id":10110},{"sent":"left hot dog","ref_id":10111},{"sent":"elephant on right","ref_id":10239},{"sent":"elephant on right","ref_id":10240},{"sent":"elephant on left","ref_id":10241},{"sent":"left bike","ref_id":10380},{"sent":"motorcycle","ref_id":10381},{"sent":"white car","ref_id":10382},{"sent":"white 
car","ref_id":10383},{"sent":"right bird","ref_id":10601},{"sent":"left duck","ref_id":10602},{"sent":"red laptop","ref_id":10795},{"sent":"red and white UNK","ref_id":10796},{"sent":"cat on right","ref_id":10847},{"sent":"chair on left","ref_id":10907},{"sent":"chair in front of woman","ref_id":10908},{"sent":"orange on left","ref_id":11114},{"sent":"orange on top right","ref_id":11115},{"sent":"truck on right","ref_id":11131},{"sent":"truck on left","ref_id":11132},{"sent":"right piece of broccoli","ref_id":11192},{"sent":"right piece of food","ref_id":11193},{"sent":"glass on right","ref_id":11281},{"sent":"glass in front of wine glass","ref_id":11282},{"sent":"left couch","ref_id":11328},{"sent":"right black couch","ref_id":11329},{"sent":"left couch","ref_id":11330},{"sent":"right chair","ref_id":11331},{"sent":"bottom row second from left","ref_id":11982},{"sent":"bottom left donut","ref_id":11983},{"sent":"top row second from left","ref_id":11984},{"sent":"top row second from right","ref_id":11985},{"sent":"top right donut","ref_id":11986},{"sent":"top right donut","ref_id":11987},{"sent":"second row from right","ref_id":11988},{"sent":"top right donut","ref_id":11989},{"sent":"bottom left donut","ref_id":11990},{"sent":"middle row second from right","ref_id":11991},{"sent":"middle row second from right","ref_id":11992},{"sent":"bottom right donut","ref_id":11993},{"sent":"cow on right","ref_id":12041},{"sent":"cow on right","ref_id":12042},{"sent":"left cow","ref_id":12043},{"sent":"right bear","ref_id":12064},{"sent":"left bear","ref_id":12065},{"sent":"right bear","ref_id":12066},{"sent":"left bear","ref_id":12067},{"sent":"blue bike","ref_id":12106},{"sent":"front bike","ref_id":12107},{"sent":"middle row second from right","ref_id":12134},{"sent":"top left donut","ref_id":12135},{"sent":"middle row second from left","ref_id":12136},{"sent":"middle row second from left","ref_id":12137},{"sent":"bottom left donut","ref_id":12138},{"sent":"middle row","ref_id":12139},{"sent":"middle row second from right","ref_id":12140},{"sent":"right zebra","ref_id":12181},{"sent":"zebra on left","ref_id":12182},{"sent":"horse on right","ref_id":12239},{"sent":"horse on left","ref_id":12240},{"sent":"right","ref_id":12299},{"sent":"middle","ref_id":12300},{"sent":"the little girl","ref_id":12394},{"sent":"bottom right corner","ref_id":12395},{"sent":"left giraffe","ref_id":12407},{"sent":"right giraffe","ref_id":12408},{"sent":"left bus","ref_id":12421},{"sent":"right bus","ref_id":12422},{"sent":"right bus","ref_id":12423},{"sent":"left bike","ref_id":12464},{"sent":"bike on right","ref_id":12465},{"sent":"bike in front","ref_id":12466},{"sent":"bench","ref_id":12539},{"sent":"table","ref_id":12540},{"sent":"right boat","ref_id":12681},{"sent":"left boat","ref_id":12682},{"sent":"left cow","ref_id":12792},{"sent":"cow in front","ref_id":12793},{"sent":"right glass","ref_id":12899},{"sent":"left glass","ref_id":12900},{"sent":"second glass from left","ref_id":12901},{"sent":"middle glass","ref_id":12902},{"sent":"right giraffe","ref_id":12983},{"sent":"middle giraffe","ref_id":12984},{"sent":"right bus","ref_id":13072},{"sent":"left bus","ref_id":13073},{"sent":"top right bear","ref_id":13080},{"sent":"top bear","ref_id":13081},{"sent":"right chair","ref_id":13100},{"sent":"bottom right corner","ref_id":13101},{"sent":"cake in front","ref_id":13110},{"sent":"middle donut","ref_id":13111},{"sent":"bottom clock","ref_id":13188},{"sent":"clock on right","ref_id":13189},{"sent":"middle 
bear","ref_id":13298},{"sent":"left bear","ref_id":13299},{"sent":"bottom right dish","ref_id":13338},{"sent":"bottom left dish","ref_id":13339},{"sent":"bottom left bowl","ref_id":13340},{"sent":"bottom right bowl","ref_id":13341},{"sent":"bottom right bowl","ref_id":13342},{"sent":"top right bowl","ref_id":13343},{"sent":"top right container","ref_id":13344},{"sent":"bottom left bowl","ref_id":13345},{"sent":"white bird on right","ref_id":13370},{"sent":"middle duck","ref_id":13371},{"sent":"right bus","ref_id":13450},{"sent":"bus in front","ref_id":13451},{"sent":"motorcycle in front","ref_id":13459},{"sent":"motorcycle on left","ref_id":13460},{"sent":"white car on right","ref_id":13461},{"sent":"cow in front","ref_id":13504},{"sent":"right horse","ref_id":13505},{"sent":"cow on left","ref_id":13506},{"sent":"white car top left","ref_id":13511},{"sent":"blue bus","ref_id":13512},{"sent":"bus in middle","ref_id":13520},{"sent":"right bus","ref_id":13521},{"sent":"giraffe in front","ref_id":13658},{"sent":"giraffe on left","ref_id":13659},{"sent":"left monitor","ref_id":13708},{"sent":"couch on right","ref_id":13768},{"sent":"couch on left","ref_id":13769},{"sent":"left racket","ref_id":13819},{"sent":"right racket","ref_id":13820},{"sent":"hand on left","ref_id":13825},{"sent":"top left corner","ref_id":13826},{"sent":"top donut","ref_id":13827},{"sent":"left bus","ref_id":13830},{"sent":"bus on right","ref_id":13831},{"sent":"red car","ref_id":13832},{"sent":"chair on right","ref_id":13851},{"sent":"chair on left","ref_id":13852},{"sent":"banana on top","ref_id":13871},{"sent":"top banana","ref_id":13872},{"sent":"banana","ref_id":13873},{"sent":"top banana","ref_id":13874},{"sent":"bottom left sandwich","ref_id":14098},{"sent":"left sandwich","ref_id":14099},{"sent":"bananas in front","ref_id":14184},{"sent":"bananas","ref_id":14185},{"sent":"right zebra","ref_id":14283},{"sent":"zebra in the middle","ref_id":14284},{"sent":"top clock","ref_id":14470},{"sent":"bottom clock","ref_id":14471},{"sent":"right horse","ref_id":14509},{"sent":"left cow","ref_id":14510},{"sent":"red book","ref_id":14551},{"sent":"green book","ref_id":14552},{"sent":"zebra on right","ref_id":14652},{"sent":"zebra in front","ref_id":14653},{"sent":"right bird","ref_id":14665},{"sent":"left bird","ref_id":14666},{"sent":"orange","ref_id":14727},{"sent":"orange on right","ref_id":14728},{"sent":"orange on right","ref_id":14729},{"sent":"top right apple","ref_id":14730},{"sent":"left elephant","ref_id":14731},{"sent":"left elephant","ref_id":14732},{"sent":"elephant on right","ref_id":14733},{"sent":"right train","ref_id":14758},{"sent":"red train","ref_id":14759},{"sent":"broccoli","ref_id":14825},{"sent":"bottom left corner","ref_id":14826},{"sent":"left meter","ref_id":14846},{"sent":"right meter","ref_id":14847},{"sent":"middle fridge","ref_id":14848},{"sent":"left fridge","ref_id":14849},{"sent":"right fridge","ref_id":14850},{"sent":"red couch","ref_id":14942},{"sent":"right couch","ref_id":14943},{"sent":"bear on left","ref_id":14944},{"sent":"bear on the right","ref_id":14945},{"sent":"right giraffe","ref_id":14954},{"sent":"right giraffe","ref_id":14955},{"sent":"UNK","ref_id":15013},{"sent":"top cake","ref_id":15014},{"sent":"left sheep","ref_id":15022},{"sent":"white sheep","ref_id":15023},{"sent":"top right","ref_id":15024},{"sent":"top right sheep","ref_id":15025},{"sent":"sheep in front","ref_id":15026},{"sent":"top right broccoli","ref_id":15071},{"sent":"broccoli on 
right","ref_id":15072},{"sent":"broccoli in front","ref_id":15073},{"sent":"bike on right","ref_id":15135},{"sent":"bike on left","ref_id":15136},{"sent":"right bottle","ref_id":15241},{"sent":"right bottle","ref_id":15242},{"sent":"pizza on right","ref_id":15306},{"sent":"pizza on right","ref_id":15307},{"sent":"slice of pizza on left","ref_id":15308},{"sent":"top left slice","ref_id":15309},{"sent":"horse on left","ref_id":15310},{"sent":"horse on right","ref_id":15311},{"sent":"left bike","ref_id":15318},{"sent":"left bike","ref_id":15319},{"sent":"elephant on left","ref_id":15450},{"sent":"right elephant","ref_id":15451},{"sent":"elephant in front","ref_id":15452},{"sent":"elephant on left","ref_id":15469},{"sent":"right elephant","ref_id":15470},{"sent":"blue car","ref_id":15580},{"sent":"left car","ref_id":15581},{"sent":"pizza on right","ref_id":15686},{"sent":"pizza","ref_id":15687},{"sent":"chair on the left","ref_id":15741},{"sent":"left chair","ref_id":15742},{"sent":"bottom right","ref_id":15767},{"sent":"top hot dog","ref_id":15768},{"sent":"sandwich","ref_id":15877},{"sent":"left sandwich","ref_id":15878},{"sent":"left screen","ref_id":15975},{"sent":"left screen","ref_id":15976},{"sent":"right screen","ref_id":15977},{"sent":"bottom bench","ref_id":15984},{"sent":"right couch","ref_id":15985},{"sent":"left suitcase","ref_id":16044},{"sent":"second from left","ref_id":16045},{"sent":"red bus","ref_id":16049},{"sent":"bus on left","ref_id":16050},{"sent":"bus","ref_id":16051},{"sent":"second from right","ref_id":16238},{"sent":"second from left","ref_id":16239},{"sent":"bottom right corner","ref_id":16240},{"sent":"middle monitor","ref_id":16241},{"sent":"cow on right","ref_id":16348},{"sent":"white cow","ref_id":16349},{"sent":"cow on left","ref_id":16350},{"sent":"red bus on right","ref_id":16389},{"sent":"middle bus","ref_id":16390},{"sent":"left bear","ref_id":16394},{"sent":"bear on right","ref_id":16395},{"sent":"bear in middle","ref_id":16396},{"sent":"bottom left zebra","ref_id":16436},{"sent":"zebra in front","ref_id":16437},{"sent":"zebra on right","ref_id":16438},{"sent":"zebra in front","ref_id":16439},{"sent":"right side of cat","ref_id":16442},{"sent":"left one","ref_id":16465},{"sent":"chair on the right","ref_id":16477},{"sent":"right chair","ref_id":16478},{"sent":"chair on left","ref_id":16479},{"sent":"left chair","ref_id":16480},{"sent":"right giraffe","ref_id":16501},{"sent":"left giraffe","ref_id":16502},{"sent":"giraffe on left","ref_id":16503},{"sent":"left toothbrush","ref_id":16528},{"sent":"the UNK","ref_id":16529},{"sent":"screen","ref_id":16552},{"sent":"right book","ref_id":16553},{"sent":"bottom phone","ref_id":16554},{"sent":"right screen","ref_id":16555},{"sent":"right laptop","ref_id":16575},{"sent":"monitor on the right","ref_id":16576},{"sent":"white bus","ref_id":16622},{"sent":"yellow bus","ref_id":16623},{"sent":"right clock","ref_id":16642},{"sent":"clock on left","ref_id":16643},{"sent":"clock on right","ref_id":16644},{"sent":"bear in front","ref_id":16653},{"sent":"bear on right","ref_id":16654},{"sent":"woman","ref_id":16706},{"sent":"right half of person","ref_id":16707},{"sent":"bottom left corner","ref_id":16708},{"sent":"left side of table","ref_id":16709},{"sent":"bear on right","ref_id":16764},{"sent":"bear in middle","ref_id":16765},{"sent":"bear on left","ref_id":16766},{"sent":"right bottom corner","ref_id":16808},{"sent":"left bottom corner","ref_id":16809},{"sent":"right bottle","ref_id":16810},{"sent":"left 
bottle","ref_id":16811},{"sent":"second from right","ref_id":16812},{"sent":"second from right","ref_id":16813},{"sent":"second bottle from left","ref_id":16814},{"sent":"second bottle from left","ref_id":16815},{"sent":"second bottle from left","ref_id":16816},{"sent":"top dog","ref_id":16911},{"sent":"top hot dog","ref_id":16912},{"sent":"plant in front of the tree","ref_id":17021},{"sent":"green flowers","ref_id":17022},{"sent":"horse on right","ref_id":17140},{"sent":"horse on left","ref_id":17141},{"sent":"bottom left white cake","ref_id":17184},{"sent":"middle donut","ref_id":17185},{"sent":"right side of pic","ref_id":17301},{"sent":"bottom left corner","ref_id":17302},{"sent":"giraffe in back","ref_id":17382},{"sent":"right giraffe","ref_id":17383},{"sent":"white boat","ref_id":17427},{"sent":"left guy","ref_id":17428},{"sent":"boat on right","ref_id":17429},{"sent":"white boat","ref_id":17430},{"sent":"bed on left","ref_id":17585},{"sent":"bed","ref_id":17586},{"sent":"middle animal","ref_id":17806},{"sent":"chair in middle","ref_id":17822},{"sent":"chair on left","ref_id":17823},{"sent":"left bus","ref_id":17830},{"sent":"right bus","ref_id":17831},{"sent":"right bike","ref_id":17835},{"sent":"left bike","ref_id":17836},{"sent":"second bike from left","ref_id":17837},{"sent":"blue bike","ref_id":17838},{"sent":"red thing","ref_id":17894},{"sent":"red suitcase","ref_id":17895},{"sent":"cup on right","ref_id":18051},{"sent":"train on left","ref_id":18071},{"sent":"right train","ref_id":18072},{"sent":"kid on left","ref_id":18129},{"sent":"kid on right","ref_id":18130},{"sent":"right side of pic","ref_id":18131},{"sent":"white shirt","ref_id":18143},{"sent":"blond hair","ref_id":18144},{"sent":"right side of bike","ref_id":18171},{"sent":"left bike","ref_id":18172},{"sent":"right one","ref_id":18281},{"sent":"bird on left","ref_id":18282},{"sent":"pizza","ref_id":18295},{"sent":"top left bowl","ref_id":18296},{"sent":"right zebra","ref_id":18305},{"sent":"zebra in front","ref_id":18306},{"sent":"left elephant","ref_id":18443},{"sent":"baby elephant","ref_id":18444},{"sent":"left meter","ref_id":18462},{"sent":"right meter","ref_id":18463},{"sent":"left UNK","ref_id":18496},{"sent":"white book","ref_id":18497},{"sent":"giraffe in front","ref_id":18537},{"sent":"top giraffe","ref_id":18538},{"sent":"cat on left","ref_id":18681},{"sent":"cat on right","ref_id":18682},{"sent":"left sheep","ref_id":18698},{"sent":"left sheep","ref_id":18699},{"sent":"baby","ref_id":18700},{"sent":"right animal","ref_id":18726},{"sent":"right sheep","ref_id":18727},{"sent":"right slice","ref_id":18736},{"sent":"left pizza","ref_id":18737},{"sent":"giraffe on left","ref_id":18906},{"sent":"left giraffe","ref_id":18907},{"sent":"top left book","ref_id":18927},{"sent":"right horse","ref_id":19026},{"sent":"middle horse","ref_id":19027},{"sent":"top right bowl","ref_id":19032},{"sent":"bottom left bowl","ref_id":19033},{"sent":"bowl","ref_id":19034},{"sent":"bottom right bowl","ref_id":19035},{"sent":"oranges","ref_id":19125},{"sent":"oranges on left","ref_id":19126},{"sent":"orange in front","ref_id":19127},{"sent":"left side of the orange","ref_id":19249},{"sent":"white car on right","ref_id":19250},{"sent":"right book","ref_id":19533},{"sent":"bottom book","ref_id":19534},{"sent":"bottom book","ref_id":19535},{"sent":"UNK","ref_id":19536},{"sent":"bananas on right","ref_id":19589},{"sent":"left banana","ref_id":19590},{"sent":"chair on right","ref_id":19594},{"sent":"chair on 
right","ref_id":19595},{"sent":"left bear","ref_id":19626},{"sent":"right bear","ref_id":19627},{"sent":"left couch","ref_id":19651},{"sent":"bed on right","ref_id":19652},{"sent":"top left bowl","ref_id":19653},{"sent":"top right bowl","ref_id":19654},{"sent":"bottom left bowl","ref_id":19655},{"sent":"bottom right","ref_id":19656},{"sent":"chair on right","ref_id":19839},{"sent":"front bench","ref_id":19840},{"sent":"red boat","ref_id":19990},{"sent":"white plane on left","ref_id":19991},{"sent":"right train","ref_id":20030},{"sent":"train","ref_id":20031},{"sent":"left glass","ref_id":20032},{"sent":"middle glass","ref_id":20033},{"sent":"glass on right","ref_id":20034},{"sent":"middle animal","ref_id":20279},{"sent":"left cow","ref_id":20280},{"sent":"cow in middle","ref_id":20281},{"sent":"right dog","ref_id":20316},{"sent":"bottom right corner","ref_id":20317},{"sent":"black suitcase","ref_id":20318},{"sent":"top right car","ref_id":20331},{"sent":"red car","ref_id":20332},{"sent":"bottom suitcase","ref_id":20673},{"sent":"red suitcase","ref_id":20674},{"sent":"top suitcase","ref_id":20675},{"sent":"bottom suitcase","ref_id":20676},{"sent":"top right dog","ref_id":20733},{"sent":"cat on the right","ref_id":20734},{"sent":"car on left","ref_id":20793},{"sent":"car on the left","ref_id":20794},{"sent":"red car","ref_id":20865},{"sent":"white car","ref_id":20866},{"sent":"red","ref_id":20925},{"sent":"boat on right","ref_id":20926},{"sent":"bottom right corner","ref_id":20927},{"sent":"top right donut","ref_id":20981},{"sent":"right donut","ref_id":20982},{"sent":"white plate on the right","ref_id":21023},{"sent":"white plate on right","ref_id":21024},{"sent":"chair on right","ref_id":21053},{"sent":"couch","ref_id":21054},{"sent":"sandwich on left","ref_id":21162},{"sent":"sandwich on right","ref_id":21163},{"sent":"right couch","ref_id":21235},{"sent":"right bed","ref_id":21236},{"sent":"bottom right corner","ref_id":21256},{"sent":"bottom left UNK","ref_id":21257},{"sent":"bed on left","ref_id":21288},{"sent":"bed","ref_id":21289},{"sent":"bottom suitcase","ref_id":21638},{"sent":"left carrot","ref_id":21639},{"sent":"UNK","ref_id":21716},{"sent":"top right book","ref_id":21717},{"sent":"right bird","ref_id":21748},{"sent":"duck","ref_id":21749},{"sent":"chair on right","ref_id":21825},{"sent":"chair on left","ref_id":21826},{"sent":"glass on right","ref_id":21925},{"sent":"cup on right","ref_id":21926},{"sent":"left zebra","ref_id":21957},{"sent":"right zebra","ref_id":21958},{"sent":"horse on left","ref_id":22022},{"sent":"horse in front","ref_id":22023},{"sent":"left keyboard","ref_id":22040},{"sent":"keyboard on the right","ref_id":22041},{"sent":"black computer right","ref_id":22042},{"sent":"right monitor","ref_id":22043},{"sent":"top left pizza","ref_id":22050},{"sent":"right slice","ref_id":22051},{"sent":"middle piece of food","ref_id":22163},{"sent":"bottom right corner","ref_id":22164},{"sent":"table in front","ref_id":22289},{"sent":"bed in front","ref_id":22290},{"sent":"orange on top of orange","ref_id":22324},{"sent":"orange bottom left","ref_id":22325},{"sent":"white car","ref_id":22382},{"sent":"white car in back","ref_id":22383},{"sent":"white car","ref_id":22384},{"sent":"bottom right orange","ref_id":22434},{"sent":"bottom right corner","ref_id":22435},{"sent":"bottom orange","ref_id":22436},{"sent":"top left apple","ref_id":22437},{"sent":"bottom left apple","ref_id":22438},{"sent":"right train","ref_id":22473},{"sent":"left train","ref_id":22474},{"sent":"couch 
on left","ref_id":22576},{"sent":"right couch","ref_id":22577},{"sent":"right giraffe","ref_id":22596},{"sent":"left giraffe","ref_id":22597},{"sent":"right zebra","ref_id":22630},{"sent":"black bag on right","ref_id":22656},{"sent":"left blue bag","ref_id":22657},{"sent":"right seat","ref_id":22658},{"sent":"oranges in front","ref_id":22723},{"sent":"chair on right","ref_id":22754},{"sent":"bed on right","ref_id":22755},{"sent":"chair bottom right","ref_id":22756},{"sent":"left sheep","ref_id":22773},{"sent":"right sheep","ref_id":22774},{"sent":"right edge of pic","ref_id":22859},{"sent":"the woman in the middle","ref_id":22860},{"sent":"woman in middle","ref_id":22861},{"sent":"left vase","ref_id":22933},{"sent":"vase","ref_id":22934},{"sent":"left vase","ref_id":22935},{"sent":"blue car on left","ref_id":22943},{"sent":"car on right","ref_id":22944},{"sent":"black cat","ref_id":22945},{"sent":"red book","ref_id":22946},{"sent":"left monitor","ref_id":22966},{"sent":"right monitor","ref_id":22967},{"sent":"left bench","ref_id":23040},{"sent":"right couch","ref_id":23041},{"sent":"orange","ref_id":23088},{"sent":"orange top right","ref_id":23089},{"sent":"orange","ref_id":23090},{"sent":"orange","ref_id":23091},{"sent":"top right corner","ref_id":23092},{"sent":"top left apples","ref_id":23151},{"sent":"middle row second from right","ref_id":23152},{"sent":"middle row second from right","ref_id":23153},{"sent":"broccoli on right","ref_id":23182},{"sent":"broccoli on the right","ref_id":23183},{"sent":"broccoli in middle","ref_id":23184},{"sent":"middle row second from bottom","ref_id":23185},{"sent":"left banana","ref_id":23297},{"sent":"left hot dog","ref_id":23298},{"sent":"the UNK","ref_id":23313},{"sent":"top of train","ref_id":23314},{"sent":"right giraffe","ref_id":23347},{"sent":"baby","ref_id":23362},{"sent":"baby","ref_id":23363},{"sent":"right zebra","ref_id":23469},{"sent":"right zebra","ref_id":23470},{"sent":"left zebra","ref_id":23471},{"sent":"giraffe on left","ref_id":23509},{"sent":"right giraffe","ref_id":23510},{"sent":"left cow","ref_id":23569},{"sent":"cow in middle","ref_id":23570},{"sent":"cow in middle","ref_id":23571},{"sent":"chair on left","ref_id":23583},{"sent":"bottom right corner","ref_id":23584},{"sent":"left hotdog","ref_id":23603},{"sent":"top right corner","ref_id":23604},{"sent":"cat on left","ref_id":23659},{"sent":"cat on left","ref_id":23660},{"sent":"cat on the left","ref_id":23661},{"sent":"elephant on left","ref_id":23721},{"sent":"elephant on right","ref_id":23722},{"sent":"left horse","ref_id":23797},{"sent":"horse on right","ref_id":23798},{"sent":"bottle in middle","ref_id":23810},{"sent":"bottle on right","ref_id":23811},{"sent":"bottle on the left","ref_id":23812},{"sent":"bottle on left","ref_id":23813},{"sent":"bottle on left","ref_id":23814},{"sent":"top right piece of broccoli","ref_id":23878},{"sent":"left piece of food","ref_id":23879},{"sent":"broccoli on left","ref_id":23880},{"sent":"right piece of food","ref_id":23881},{"sent":"right","ref_id":23882},{"sent":"red thing","ref_id":23883},{"sent":"suitcase on the right","ref_id":24098},{"sent":"suitcase on left","ref_id":24099},{"sent":"left bear","ref_id":24120},{"sent":"right bear","ref_id":24121},{"sent":"right bear","ref_id":24122},{"sent":"right bear","ref_id":24123},{"sent":"left bear","ref_id":24124},{"sent":"baby elephant","ref_id":24187},{"sent":"chair in front of man","ref_id":24192},{"sent":"laptop on left","ref_id":24193},{"sent":"laptop on 
right","ref_id":24194},{"sent":"UNK","ref_id":24224},{"sent":"top right book","ref_id":24225},{"sent":"right most UNK","ref_id":24274},{"sent":"second glass from right","ref_id":24275},{"sent":"second glass from left","ref_id":24276},{"sent":"umbrella on left","ref_id":24402},{"sent":"top right umbrella","ref_id":24403},{"sent":"green bus on left","ref_id":24448},{"sent":"bus in front","ref_id":24449},{"sent":"second from right","ref_id":24503},{"sent":"second from left","ref_id":24504},{"sent":"left train","ref_id":24505},{"sent":"second from left","ref_id":24506},{"sent":"cat on the right","ref_id":24523},{"sent":"cat on the right","ref_id":24524},{"sent":"bottom left corner","ref_id":24573},{"sent":"right bottom corner","ref_id":24574},{"sent":"right horse","ref_id":24588},{"sent":"left horse","ref_id":24589},{"sent":"right umbrella","ref_id":24604},{"sent":"top right umbrella","ref_id":24605},{"sent":"umbrella on left","ref_id":24606},{"sent":"cow on right","ref_id":24684},{"sent":"cow on left","ref_id":24685},{"sent":"cow on right","ref_id":24686},{"sent":"bottom carrot","ref_id":24687},{"sent":"top left piece of food","ref_id":24688},{"sent":"top donut","ref_id":24778},{"sent":"top left hot dog","ref_id":24779},{"sent":"bike on right","ref_id":24859},{"sent":"bike","ref_id":24860},{"sent":"white UNK","ref_id":24943},{"sent":"white vase","ref_id":24944},{"sent":"white UNK","ref_id":24945},{"sent":"the UNK","ref_id":24946},{"sent":"right UNK","ref_id":25053},{"sent":"right meter","ref_id":25054},{"sent":"middle meter","ref_id":25055},{"sent":"bowl on left","ref_id":25137},{"sent":"dog on left","ref_id":25151},{"sent":"red dog","ref_id":25152},{"sent":"left monitor","ref_id":25302},{"sent":"right monitor","ref_id":25303},{"sent":"right most vase","ref_id":25313},{"sent":"vase on left","ref_id":25314},{"sent":"white book","ref_id":25336},{"sent":"horse on right","ref_id":25342},{"sent":"green stuff","ref_id":25445},{"sent":"green stuff","ref_id":25446},{"sent":"right couch","ref_id":25504},{"sent":"couch","ref_id":25505},{"sent":"bear in front","ref_id":25659},{"sent":"bear","ref_id":25660},{"sent":"white bear","ref_id":25694},{"sent":"bear","ref_id":25695},{"sent":"red bus","ref_id":25717},{"sent":"left bus","ref_id":25718},{"sent":"red bus on right","ref_id":25719},{"sent":"toilet in front","ref_id":25762},{"sent":"sink on the right","ref_id":25763},{"sent":"apple on left","ref_id":25788},{"sent":"right slice","ref_id":25789},{"sent":"glass on left","ref_id":25826},{"sent":"glass on right","ref_id":25827},{"sent":"right elephant","ref_id":25831},{"sent":"elephant on right","ref_id":25832},{"sent":"top right microwave","ref_id":25888},{"sent":"left monitor","ref_id":25889},{"sent":"broccoli on left","ref_id":26005},{"sent":"broccoli on right","ref_id":26006},{"sent":"broccoli in middle","ref_id":26007},{"sent":"bottom oven","ref_id":26157},{"sent":"bottom oven","ref_id":26158},{"sent":"keyboard","ref_id":26159},{"sent":"white keyboard","ref_id":26160},{"sent":"bottom left corner","ref_id":26344},{"sent":"left chair","ref_id":26345},{"sent":"couch","ref_id":26346},{"sent":"couch","ref_id":26347},{"sent":"left banana","ref_id":26384},{"sent":"banana in the back","ref_id":26385},{"sent":"boat in front","ref_id":26447},{"sent":"left boat","ref_id":26448},{"sent":"boat in front","ref_id":26449},{"sent":"middle boat","ref_id":26450},{"sent":"white truck","ref_id":26513},{"sent":"white truck","ref_id":26514},{"sent":"white truck","ref_id":26515},{"sent":"left 
bike","ref_id":26528},{"sent":"front bike","ref_id":26529},{"sent":"red chair","ref_id":26601},{"sent":"red chair","ref_id":26602},{"sent":"bird on right","ref_id":26618},{"sent":"bird on right","ref_id":26619},{"sent":"bear in front","ref_id":26825},{"sent":"right bear","ref_id":26826},{"sent":"left bus","ref_id":26844},{"sent":"bus in front","ref_id":26845},{"sent":"red light","ref_id":27005},{"sent":"traffic light","ref_id":27006},{"sent":"traffic light","ref_id":27007},{"sent":"middle giraffe","ref_id":27130},{"sent":"left woman","ref_id":27131},{"sent":"right bear","ref_id":27214},{"sent":"left bear","ref_id":27215},{"sent":"right cat","ref_id":27232},{"sent":"cat on left","ref_id":27233},{"sent":"zebra in front","ref_id":27247},{"sent":"right zebra","ref_id":27248},{"sent":"left zebra","ref_id":27249},{"sent":"right car","ref_id":27250},{"sent":"white car","ref_id":27251},{"sent":"top left microwave","ref_id":27288},{"sent":"microwave on right","ref_id":27289},{"sent":"toilet on left","ref_id":27314},{"sent":"toilet","ref_id":27315},{"sent":"sheep on right","ref_id":27373},{"sent":"sheep in front","ref_id":27374},{"sent":"left sandwich","ref_id":27432},{"sent":"right sandwich","ref_id":27433},{"sent":"cat on right","ref_id":27465},{"sent":"cat","ref_id":27466},{"sent":"yellow toothbrush","ref_id":27526},{"sent":"bottom brush","ref_id":27527},{"sent":"top right pizza","ref_id":27572},{"sent":"pizza","ref_id":27573},{"sent":"bottom left carrot","ref_id":27751},{"sent":"sandwich on right","ref_id":27796},{"sent":"sandwich on the left","ref_id":27797},{"sent":"left slice","ref_id":27848},{"sent":"right side of pizza","ref_id":27849},{"sent":"bottom menu","ref_id":27880},{"sent":"top book","ref_id":27881},{"sent":"book on right","ref_id":27882},{"sent":"keyboard on right","ref_id":27883},{"sent":"book in middle","ref_id":27935},{"sent":"UNK book","ref_id":27936},{"sent":"bowl of food","ref_id":27973},{"sent":"giraffe in front","ref_id":28408},{"sent":"left giraffe","ref_id":28409},{"sent":"left flower vase","ref_id":28439},{"sent":"right plant","ref_id":28440},{"sent":"left vase","ref_id":28441},{"sent":"right vase","ref_id":28442},{"sent":"train","ref_id":28852},{"sent":"top right corner","ref_id":28853},{"sent":"top right chair","ref_id":28854},{"sent":"cat on the right","ref_id":29103},{"sent":"cat on right","ref_id":29104},{"sent":"bottom left oven","ref_id":29105},{"sent":"right sink","ref_id":29106},{"sent":"middle giraffe","ref_id":29153},{"sent":"giraffe in front","ref_id":29154},{"sent":"dog in front","ref_id":29238},{"sent":"dog on left","ref_id":29239},{"sent":"front plate","ref_id":29270},{"sent":"right slice","ref_id":29271},{"sent":"baby","ref_id":29301},{"sent":"elephant","ref_id":29302},{"sent":"bottom right dish","ref_id":29360},{"sent":"bottom plate","ref_id":29361},{"sent":"giraffe on right","ref_id":29385},{"sent":"giraffe on right","ref_id":29386},{"sent":"giraffe in front","ref_id":29387},{"sent":"left giraffe","ref_id":29388},{"sent":"left giraffe","ref_id":29389},{"sent":"left bear","ref_id":29460},{"sent":"white bear","ref_id":29461},{"sent":"big bear","ref_id":29462},{"sent":"bowl of soup","ref_id":29569},{"sent":"white cup","ref_id":29570},{"sent":"white car on right","ref_id":29575},{"sent":"truck","ref_id":29576},{"sent":"second bike from left","ref_id":29625},{"sent":"second bike from left","ref_id":29626},{"sent":"bike on right","ref_id":29627},{"sent":"left train","ref_id":29630},{"sent":"left plane","ref_id":29631},{"sent":"left 
bench","ref_id":29856},{"sent":"bench in front","ref_id":29857},{"sent":"top right umbrella","ref_id":29920},{"sent":"top left umbrella","ref_id":29921},{"sent":"guy in back","ref_id":29964},{"sent":"bus","ref_id":29967},{"sent":"white car","ref_id":29974},{"sent":"car on left","ref_id":29975},{"sent":"middle screen","ref_id":30281},{"sent":"left monitor","ref_id":30282},{"sent":"elephant on left","ref_id":30393},{"sent":"right elephant","ref_id":30394},{"sent":"bed on left","ref_id":30401},{"sent":"bottom bed","ref_id":30402},{"sent":"left bed","ref_id":30403},{"sent":"plant on left","ref_id":30480},{"sent":"vase","ref_id":30481},{"sent":"left pot","ref_id":30482},{"sent":"cow in back","ref_id":30529},{"sent":"cow","ref_id":30530},{"sent":"bottom left suitcase","ref_id":30631},{"sent":"black suitcase","ref_id":30632},{"sent":"right suitcase","ref_id":30633},{"sent":"black suitcase","ref_id":30634},{"sent":"black cat","ref_id":30699},{"sent":"cat on left","ref_id":30700},{"sent":"bear on left","ref_id":30701},{"sent":"bear","ref_id":30702},{"sent":"motorcycle on right","ref_id":30719},{"sent":"front bike","ref_id":30720},{"sent":"front left bike","ref_id":30721},{"sent":"right dish","ref_id":30813},{"sent":"left bowl","ref_id":30814},{"sent":"cat on left","ref_id":30839},{"sent":"cat on the left","ref_id":30840},{"sent":"truck","ref_id":30869},{"sent":"white truck","ref_id":30870},{"sent":"glass with red liquid","ref_id":30970},{"sent":"glass on right","ref_id":30971},{"sent":"right meter","ref_id":30996},{"sent":"left meter","ref_id":30997},{"sent":"right screen","ref_id":31025},{"sent":"left monitor","ref_id":31026},{"sent":"giraffe in back","ref_id":31114},{"sent":"right giraffe","ref_id":31115},{"sent":"left sheep","ref_id":31161},{"sent":"right sheep","ref_id":31162},{"sent":"sandwich on right","ref_id":31324},{"sent":"sandwich on left","ref_id":31325},{"sent":"cup on right","ref_id":31373},{"sent":"top right cup","ref_id":31374},{"sent":"bowl of food on left","ref_id":31375},{"sent":"cup of coffee","ref_id":31376},{"sent":"cup on right","ref_id":31377},{"sent":"bowl of UNK","ref_id":31378},{"sent":"horse on left","ref_id":31391},{"sent":"horse on right","ref_id":31392},{"sent":"sheep on right","ref_id":31393},{"sent":"sheep in back","ref_id":31394},{"sent":"sheep on right","ref_id":31395},{"sent":"bottom right corner","ref_id":31396},{"sent":"bottom left sheep","ref_id":31397},{"sent":"bus on right","ref_id":31558},{"sent":"bus in front","ref_id":31559},{"sent":"left bus","ref_id":31560},{"sent":"front vase","ref_id":31579},{"sent":"vase on left","ref_id":31580},{"sent":"right vase","ref_id":31581},{"sent":"bike on right","ref_id":31594},{"sent":"red bike","ref_id":31595},{"sent":"red bike","ref_id":31596},{"sent":"truck on right","ref_id":31619},{"sent":"truck","ref_id":31620},{"sent":"right sandwich","ref_id":31687},{"sent":"left sandwich","ref_id":31688},{"sent":"white thing on right","ref_id":31703},{"sent":"horse on left","ref_id":31706},{"sent":"giraffe in middle","ref_id":31729},{"sent":"giraffe in front","ref_id":31730},{"sent":"right sheep","ref_id":31736},{"sent":"left sheep","ref_id":31737},{"sent":"right animal","ref_id":31758},{"sent":"left sheep","ref_id":31759},{"sent":"meter on the right","ref_id":31778},{"sent":"right meter","ref_id":31779},{"sent":"sheep in front","ref_id":31897},{"sent":"sheep in front","ref_id":31898},{"sent":"right sheep","ref_id":31899},{"sent":"left donut","ref_id":31960},{"sent":"right donut","ref_id":31961},{"sent":"umbrella on 
left","ref_id":31981},{"sent":"umbrella","ref_id":31982},{"sent":"elephant on left","ref_id":32094},{"sent":"elephant on right","ref_id":32095},{"sent":"right sandwich","ref_id":32165},{"sent":"left hot dog","ref_id":32166},{"sent":"slice of pizza","ref_id":32214},{"sent":"top oven","ref_id":32265},{"sent":"stove","ref_id":32266},{"sent":"motorcycle in front","ref_id":32311},{"sent":"front bike","ref_id":32312},{"sent":"left sandwich","ref_id":32362},{"sent":"bottom left food","ref_id":32363},{"sent":"right sandwich","ref_id":32364},{"sent":"bottom left bread","ref_id":32365},{"sent":"hand","ref_id":32370},{"sent":"hand","ref_id":32371},{"sent":"right screen","ref_id":32572},{"sent":"left monitor","ref_id":32573},{"sent":"dog on right","ref_id":32642},{"sent":"dog on left","ref_id":32643},{"sent":"zebra in back","ref_id":32928},{"sent":"zebra in front","ref_id":32929},{"sent":"left glass","ref_id":32956},{"sent":"person in back","ref_id":32957},{"sent":"pizza slice on top","ref_id":33014},{"sent":"pizza slice on right","ref_id":33015},{"sent":"pizza slice on right","ref_id":33016},{"sent":"bottom left pizza","ref_id":33017},{"sent":"top pizza","ref_id":33018},{"sent":"left UNK","ref_id":33237},{"sent":"bottom right corner","ref_id":33238},{"sent":"the little UNK","ref_id":33239},{"sent":"red vase","ref_id":33240},{"sent":"the little UNK","ref_id":33241},{"sent":"left vase","ref_id":33242},{"sent":"cat on right","ref_id":33291},{"sent":"left cat","ref_id":33292},{"sent":"zebra in front","ref_id":33439},{"sent":"right light","ref_id":33455},{"sent":"pizza on right","ref_id":33470},{"sent":"bottom left slice","ref_id":33471},{"sent":"right elephant","ref_id":33500},{"sent":"elephant on left","ref_id":33501},{"sent":"bottom donut","ref_id":33626},{"sent":"donut on right","ref_id":33627},{"sent":"donut on left","ref_id":33628},{"sent":"zebra on left","ref_id":33639},{"sent":"right train","ref_id":33681},{"sent":"left train","ref_id":33682},{"sent":"right bus","ref_id":33683},{"sent":"chair on right","ref_id":33684},{"sent":"top right dog","ref_id":33685},{"sent":"cat on left","ref_id":33686},{"sent":"red bike","ref_id":33714},{"sent":"front bike","ref_id":33715},{"sent":"bottom left cup","ref_id":33800},{"sent":"cup","ref_id":33801},{"sent":"elephant in front","ref_id":33806},{"sent":"elephant on right","ref_id":33807},{"sent":"bottom left bowl","ref_id":33829},{"sent":"bottom left cup","ref_id":33830},{"sent":"bowl of rice in back right","ref_id":33831},{"sent":"broccoli on left","ref_id":33914},{"sent":"bottom left broccoli","ref_id":33915},{"sent":"middle donut","ref_id":33952},{"sent":"middle donut","ref_id":33953},{"sent":"second from right","ref_id":33992},{"sent":"second from right","ref_id":33993},{"sent":"second from right","ref_id":33994},{"sent":"right carrot","ref_id":33995},{"sent":"right side of food","ref_id":33996},{"sent":"right most carrot","ref_id":33997},{"sent":"UNK","ref_id":34321},{"sent":"glass of water","ref_id":34322},{"sent":"elephant on right","ref_id":34631},{"sent":"elephant in front","ref_id":34632},{"sent":"top left food","ref_id":34787},{"sent":"top left food","ref_id":34788},{"sent":"left umbrella","ref_id":34858},{"sent":"left umbrella","ref_id":34859},{"sent":"bottom right bowl","ref_id":34895},{"sent":"car in front of the cart","ref_id":34943},{"sent":"car on left","ref_id":34944},{"sent":"pizza on right","ref_id":34998},{"sent":"pizza slice on left","ref_id":34999},{"sent":"UNK","ref_id":35034},{"sent":"left monitor","ref_id":35035},{"sent":"left 
keyboard","ref_id":35090},{"sent":"laptop on left","ref_id":35091},{"sent":"top left laptop","ref_id":35121},{"sent":"top left chair","ref_id":35122},{"sent":"chair on right","ref_id":35148},{"sent":"couch on right","ref_id":35149},{"sent":"broccoli on right","ref_id":35188},{"sent":"broccoli on the right","ref_id":35189},{"sent":"broccoli on left","ref_id":35190},{"sent":"broccoli in middle","ref_id":35191},{"sent":"middle bird","ref_id":35194},{"sent":"left cat","ref_id":35195},{"sent":"bird on left","ref_id":35217},{"sent":"bird on the right","ref_id":35218},{"sent":"black bag on left","ref_id":35368},{"sent":"black bag on top of suitcase","ref_id":35369},{"sent":"right suitcase","ref_id":35370},{"sent":"right suitcase","ref_id":35371},{"sent":"black suitcase on top of suitcase","ref_id":35372},{"sent":"top right orange","ref_id":35377},{"sent":"middle row second from right","ref_id":35378},{"sent":"second from left","ref_id":35379},{"sent":"stove top right","ref_id":35391},{"sent":"oven","ref_id":35392},{"sent":"left girl","ref_id":35420},{"sent":"chair on right","ref_id":35421},{"sent":"top right apple","ref_id":35522},{"sent":"orange","ref_id":35523},{"sent":"orange bottom right","ref_id":35524},{"sent":"bottom right apple","ref_id":35525},{"sent":"top apple","ref_id":35526},{"sent":"right suitcase","ref_id":35764},{"sent":"right bag","ref_id":35765},{"sent":"left couch","ref_id":35833},{"sent":"right couch","ref_id":35834},{"sent":"right zebra","ref_id":35913},{"sent":"left zebra","ref_id":35914},{"sent":"green plant","ref_id":36001},{"sent":"bottom right corner","ref_id":36002},{"sent":"cat on right","ref_id":36082},{"sent":"cat","ref_id":36083},{"sent":"white truck","ref_id":36111},{"sent":"white truck","ref_id":36112},{"sent":"right zebra","ref_id":36209},{"sent":"left zebra","ref_id":36210},{"sent":"bottom right food","ref_id":36224},{"sent":"right pizza","ref_id":36225},{"sent":"top right banana","ref_id":36279},{"sent":"bottom left apple","ref_id":36280},{"sent":"banana in middle","ref_id":36281},{"sent":"UNK","ref_id":36365},{"sent":"top left umbrella","ref_id":36366},{"sent":"bottom right microwave","ref_id":36432},{"sent":"bottom right corner","ref_id":36433},{"sent":"left monitor","ref_id":36434},{"sent":"top right microwave","ref_id":36435},{"sent":"left microwave","ref_id":36436},{"sent":"sandwich","ref_id":36689},{"sent":"top sandwich","ref_id":36690},{"sent":"bottom left orange","ref_id":36725},{"sent":"orange peel","ref_id":36726},{"sent":"right glass","ref_id":36762},{"sent":"right glass","ref_id":36763},{"sent":"glass on left","ref_id":36764},{"sent":"wine bottle on right","ref_id":36789},{"sent":"left glass","ref_id":36790},{"sent":"right giraffe","ref_id":36894},{"sent":"left giraffe","ref_id":36895},{"sent":"red sweater","ref_id":36900},{"sent":"bear on right","ref_id":36901},{"sent":"green and white UNK","ref_id":36902},{"sent":"bear on right","ref_id":36903},{"sent":"right animal","ref_id":37150},{"sent":"right animal","ref_id":37151},{"sent":"chair in middle","ref_id":37201},{"sent":"right chair","ref_id":37202},{"sent":"left chair","ref_id":37203},{"sent":"white plane","ref_id":37213},{"sent":"plane in front","ref_id":37214},{"sent":"bottom left corner","ref_id":37252},{"sent":"vase on right","ref_id":37253},{"sent":"right vase","ref_id":37254},{"sent":"UNK","ref_id":37255},{"sent":"bananas on left","ref_id":37278},{"sent":"right bunch","ref_id":37279},{"sent":"slice of pizza in front","ref_id":37540},{"sent":"pizza","ref_id":37541},{"sent":"white 
umbrella","ref_id":37572},{"sent":"bottom right corner","ref_id":37573},{"sent":"left bowl","ref_id":37650},{"sent":"broccoli","ref_id":37651},{"sent":"top right bowl","ref_id":37652},{"sent":"bowl of UNK","ref_id":37661},{"sent":"bowl of UNK","ref_id":37662},{"sent":"bowl of rice","ref_id":37663},{"sent":"black suitcase on left","ref_id":37710},{"sent":"suitcase on left","ref_id":37711},{"sent":"right suitcase","ref_id":37712},{"sent":"zebra on left","ref_id":37749},{"sent":"zebra in back","ref_id":37750},{"sent":"bottom left corner","ref_id":37802},{"sent":"middle cake","ref_id":37803},{"sent":"bed on right","ref_id":37879},{"sent":"couch on left","ref_id":37880},{"sent":"couch on left","ref_id":37895},{"sent":"couch","ref_id":37896},{"sent":"bottom right corner","ref_id":37933},{"sent":"cow in front","ref_id":37963},{"sent":"black cow","ref_id":37964},{"sent":"bottom phone","ref_id":38029},{"sent":"right phone","ref_id":38030},{"sent":"banana on top","ref_id":38151},{"sent":"top left sandwich","ref_id":38266},{"sent":"sandwich","ref_id":38267},{"sent":"food in front","ref_id":38268},{"sent":"second from right","ref_id":38333},{"sent":"second board from right","ref_id":38334},{"sent":"left board","ref_id":38335},{"sent":"right","ref_id":38368},{"sent":"bottom left","ref_id":38369},{"sent":"top left dish","ref_id":38370},{"sent":"carrots","ref_id":38371},{"sent":"right bear","ref_id":38388},{"sent":"left bear","ref_id":38389},{"sent":"top dog","ref_id":38567},{"sent":"dog","ref_id":38568},{"sent":"clock face","ref_id":38601},{"sent":"clock on left","ref_id":38602},{"sent":"clock on right","ref_id":38603},{"sent":"white plane","ref_id":38647},{"sent":"bed on right","ref_id":38648},{"sent":"bed","ref_id":38649},{"sent":"top right corner","ref_id":38720},{"sent":"right bottom corner","ref_id":38721},{"sent":"top dog","ref_id":38779},{"sent":"left dog","ref_id":38780},{"sent":"top right dog","ref_id":38781},{"sent":"right glass","ref_id":38783},{"sent":"chair on right","ref_id":38903},{"sent":"red shirt","ref_id":38919},{"sent":"top right corner","ref_id":38920},{"sent":"right","ref_id":39016},{"sent":"left vase","ref_id":39017},{"sent":"right bottle","ref_id":39018},{"sent":"left elephant","ref_id":39024},{"sent":"elephant on right","ref_id":39025},{"sent":"left elephant","ref_id":39026},{"sent":"giraffe in middle","ref_id":39045},{"sent":"left train","ref_id":39150},{"sent":"front train","ref_id":39151},{"sent":"cat on the left","ref_id":39159},{"sent":"top cat","ref_id":39160},{"sent":"left suitcase","ref_id":39206},{"sent":"right","ref_id":39207},{"sent":"giraffe in front","ref_id":39387},{"sent":"giraffe in front","ref_id":39388},{"sent":"white teddy bear on right","ref_id":39400},{"sent":"teddy bear in middle","ref_id":39401},{"sent":"bear on left","ref_id":39402},{"sent":"bottom right bear","ref_id":39403},{"sent":"bear on left","ref_id":39404},{"sent":"bear on right","ref_id":39405},{"sent":"left elephant","ref_id":39448},{"sent":"elephant in front","ref_id":39449},{"sent":"cow on left","ref_id":39456},{"sent":"cow on right","ref_id":39457},{"sent":"right sandwich","ref_id":39460},{"sent":"sandwich on left","ref_id":39461},{"sent":"left chair","ref_id":39765},{"sent":"chair on left","ref_id":39766},{"sent":"left cow","ref_id":39797},{"sent":"cow on right","ref_id":39798},{"sent":"green apple","ref_id":39815},{"sent":"green apple","ref_id":39816},{"sent":"green apple","ref_id":39817},{"sent":"right zebra","ref_id":39850},{"sent":"zebra in front","ref_id":39851},{"sent":"right 
pizza","ref_id":39883},{"sent":"pizza in front","ref_id":39884},{"sent":"pizza on left","ref_id":39885},{"sent":"right laptop","ref_id":39891},{"sent":"left laptop","ref_id":39892},{"sent":"left truck","ref_id":39963},{"sent":"red truck","ref_id":39964},{"sent":"top left orange","ref_id":39965},{"sent":"orange","ref_id":39966},{"sent":"cat on left","ref_id":40063},{"sent":"cat on right","ref_id":40064},{"sent":"bed","ref_id":40188},{"sent":"bed on left","ref_id":40189},{"sent":"top right microwave","ref_id":40211},{"sent":"middle UNK","ref_id":40212},{"sent":"right phone","ref_id":40213},{"sent":"left cake","ref_id":40235},{"sent":"left cake","ref_id":40236},{"sent":"right cake","ref_id":40237},{"sent":"bottom sandwich","ref_id":40238},{"sent":"left most seat","ref_id":40248},{"sent":"left suitcase","ref_id":40249},{"sent":"bottom right bowl","ref_id":40287},{"sent":"top right bowl","ref_id":40288},{"sent":"table","ref_id":40350},{"sent":"table behind the table","ref_id":40351},{"sent":"left person","ref_id":40358},{"sent":"cat on right","ref_id":40400},{"sent":"left cat","ref_id":40401},{"sent":"the plant","ref_id":40456},{"sent":"left bunch","ref_id":40457},{"sent":"right dog","ref_id":40458},{"sent":"dog on left","ref_id":40459},{"sent":"left bike","ref_id":40479},{"sent":"front bike","ref_id":40480},{"sent":"bike on right","ref_id":40500},{"sent":"bike in front","ref_id":40501},{"sent":"bike on left","ref_id":40502},{"sent":"top bowl","ref_id":40554},{"sent":"right sandwich","ref_id":40555},{"sent":"cow in front","ref_id":40571},{"sent":"cow on right","ref_id":40572},{"sent":"left meter","ref_id":40753},{"sent":"left car","ref_id":40754},{"sent":"bottom left orange","ref_id":40762},{"sent":"orange","ref_id":40763},{"sent":"left giraffe","ref_id":40804},{"sent":"giraffe on right","ref_id":40805},{"sent":"bottom left fruit","ref_id":40810},{"sent":"banana slice in the middle","ref_id":40811},{"sent":"banana on right","ref_id":40812},{"sent":"toilet","ref_id":40909},{"sent":"toilet","ref_id":40910},{"sent":"second from right","ref_id":40945},{"sent":"second row from right","ref_id":40946},{"sent":"second from right","ref_id":40947},{"sent":"second banana from left","ref_id":40948},{"sent":"second row from left","ref_id":40949},{"sent":"left elephant","ref_id":41094},{"sent":"baby elephant","ref_id":41095},{"sent":"right cow","ref_id":41136},{"sent":"giraffe in front","ref_id":41137},{"sent":"chair on right","ref_id":41148},{"sent":"bed","ref_id":41167},{"sent":"right bed","ref_id":41168},{"sent":"left screen","ref_id":41173},{"sent":"right monitor","ref_id":41174},{"sent":"left white chair","ref_id":41197},{"sent":"chair on right","ref_id":41198},{"sent":"UNK","ref_id":41209},{"sent":"blue UNK","ref_id":41351},{"sent":"right giraffe","ref_id":41359},{"sent":"giraffe on left","ref_id":41360},{"sent":"right bed","ref_id":41531},{"sent":"red bed","ref_id":41532},{"sent":"left giraffe","ref_id":41551},{"sent":"right giraffe","ref_id":41552},{"sent":"sheep in back","ref_id":41743},{"sent":"sheep in front","ref_id":41744},{"sent":"broccoli on left","ref_id":41795},{"sent":"broccoli on right","ref_id":41796},{"sent":"broccoli on right","ref_id":41797},{"sent":"broccoli on left","ref_id":41798},{"sent":"broccoli on left","ref_id":41799},{"sent":"bottom right food","ref_id":41805},{"sent":"left bowl","ref_id":41806},{"sent":"white bowl on right","ref_id":41807},{"sent":"right most food","ref_id":41808},{"sent":"bowl of food on left","ref_id":41809},{"sent":"sheep in 
back","ref_id":41877},{"sent":"right sheep","ref_id":41878},{"sent":"cat on right","ref_id":41938},{"sent":"cat on left","ref_id":41939},{"sent":"left horse","ref_id":42136},{"sent":"left horse","ref_id":42137},{"sent":"bird","ref_id":42284},{"sent":"bird","ref_id":42285},{"sent":"left bird","ref_id":42296},{"sent":"duck","ref_id":42297},{"sent":"red surfboard","ref_id":42329},{"sent":"white boat","ref_id":42330},{"sent":"bottom left toilet","ref_id":42354},{"sent":"white toilet","ref_id":42355},{"sent":"toilet on left","ref_id":42356},{"sent":"toilet on right","ref_id":42357},{"sent":"toilet on left","ref_id":42358},{"sent":"toilet on left","ref_id":42359},{"sent":"bananas on left","ref_id":42428},{"sent":"banana bunch","ref_id":42429},{"sent":"bowl of food on right","ref_id":42658},{"sent":"bowl of food","ref_id":42659},{"sent":"bowl of food","ref_id":42660},{"sent":"left donut","ref_id":42697},{"sent":"bottom right donut","ref_id":42698},{"sent":"train","ref_id":42839},{"sent":"left bird","ref_id":42922},{"sent":"bird on right","ref_id":42923},{"sent":"zebra on left","ref_id":42928},{"sent":"zebra on right","ref_id":42929},{"sent":"zebra in back","ref_id":42930},{"sent":"sandwich on top","ref_id":43160},{"sent":"top left sandwich","ref_id":43161},{"sent":"cake","ref_id":43210},{"sent":"bottom left suitcase","ref_id":43211},{"sent":"bird on the left","ref_id":43396},{"sent":"plate of food","ref_id":43446},{"sent":"top bowl","ref_id":43447},{"sent":"plate with lettuce","ref_id":43448},{"sent":"sandwich on right","ref_id":43449},{"sent":"right sandwich","ref_id":43480},{"sent":"sandwich on left","ref_id":43481},{"sent":"toilet","ref_id":43581},{"sent":"toilet","ref_id":43582},{"sent":"cat on right","ref_id":43596},{"sent":"black chair bottom right","ref_id":43597},{"sent":"giraffe on right","ref_id":43598},{"sent":"left giraffe","ref_id":43599},{"sent":"right sandwich","ref_id":43700},{"sent":"sandwich on left","ref_id":43701},{"sent":"red truck","ref_id":43784},{"sent":"truck","ref_id":43785},{"sent":"banana on right","ref_id":43815},{"sent":"right banana","ref_id":43816},{"sent":"bottom right corner","ref_id":43817},{"sent":"banana on right","ref_id":43818},{"sent":"middle banana","ref_id":43819},{"sent":"cow on left","ref_id":43940},{"sent":"top cow","ref_id":43941},{"sent":"black suitcase on right","ref_id":43987},{"sent":"black suitcase on right","ref_id":43988},{"sent":"red suitcase","ref_id":43989},{"sent":"black suitcase in middle","ref_id":43990},{"sent":"baby","ref_id":44146},{"sent":"teddy bear","ref_id":44147},{"sent":"bottom left food","ref_id":44160},{"sent":"right plate","ref_id":44161},{"sent":"bottom tray","ref_id":44162},{"sent":"top plate","ref_id":44163},{"sent":"right bed","ref_id":44228},{"sent":"bed on left","ref_id":44229},{"sent":"bottom left bowl","ref_id":44296},{"sent":"bottom banana","ref_id":44297},{"sent":"front giraffe","ref_id":44300},{"sent":"giraffe on right","ref_id":44301},{"sent":"plane in front","ref_id":44369},{"sent":"plane on left","ref_id":44370},{"sent":"plane in front","ref_id":44408},{"sent":"plane in front","ref_id":44409},{"sent":"top right cake","ref_id":44426},{"sent":"left half of sandwich","ref_id":44427},{"sent":"left sandwich","ref_id":44428},{"sent":"right half of sandwich","ref_id":44429},{"sent":"bottom left bread","ref_id":44430},{"sent":"bottom left cake","ref_id":44431},{"sent":"right half of sandwich","ref_id":44432},{"sent":"white toilet","ref_id":44454},{"sent":"left sheep","ref_id":44463},{"sent":"sheep in 
front","ref_id":44464},{"sent":"sandwich on right","ref_id":44467},{"sent":"left sandwich","ref_id":44468},{"sent":"right car","ref_id":44514},{"sent":"car on left","ref_id":44515},{"sent":"yellow cab","ref_id":44516},{"sent":"right toilet","ref_id":44573},{"sent":"toilet on left","ref_id":44574},{"sent":"bottom right bear","ref_id":44661},{"sent":"bear on left","ref_id":44662},{"sent":"bear on right","ref_id":44663},{"sent":"right meter","ref_id":44814},{"sent":"left meter","ref_id":44815},{"sent":"red sauce","ref_id":44856},{"sent":"glass on left","ref_id":44857},{"sent":"bear on right","ref_id":44884},{"sent":"brown bear","ref_id":44885},{"sent":"bear on left","ref_id":44886},{"sent":"bear on right","ref_id":44887},{"sent":"teddy bear on left","ref_id":44888},{"sent":"left meter","ref_id":44941},{"sent":"right meter","ref_id":44942},{"sent":"UNK","ref_id":44989},{"sent":"top right corner","ref_id":44990},{"sent":"bed","ref_id":45056},{"sent":"bottom left suitcase","ref_id":45057},{"sent":"blue umbrella","ref_id":45127},{"sent":"left blue vase","ref_id":45128},{"sent":"blue umbrella","ref_id":45129},{"sent":"left blue umbrella","ref_id":45130},{"sent":"bottom right corner","ref_id":45131},{"sent":"glass of the beer","ref_id":45256},{"sent":"right side of table","ref_id":45257},{"sent":"white bus","ref_id":45498},{"sent":"bus in middle","ref_id":45499},{"sent":"right bunch of bananas","ref_id":45503},{"sent":"bananas on right","ref_id":45504},{"sent":"bananas on the left","ref_id":45505},{"sent":"elephant on right","ref_id":45533},{"sent":"elephant on left","ref_id":45534},{"sent":"elephant in front","ref_id":45535},{"sent":"left zebra","ref_id":45645},{"sent":"right zebra","ref_id":45646},{"sent":"middle row","ref_id":45680},{"sent":"bottom right cake","ref_id":45681},{"sent":"white car in back","ref_id":45957},{"sent":"train","ref_id":45958},{"sent":"pizza","ref_id":45972},{"sent":"pizza slice","ref_id":45973},{"sent":"table in front","ref_id":45974},{"sent":"top right corner","ref_id":45975},{"sent":"sandwich on left","ref_id":45976},{"sent":"right half of sandwich","ref_id":45977},{"sent":"blue thing","ref_id":45984},{"sent":"blue thing","ref_id":45985},{"sent":"chair behind dog","ref_id":46007},{"sent":"chair on right","ref_id":46008},{"sent":"donut on right","ref_id":46423},{"sent":"bottom row second from left","ref_id":46439},{"sent":"bottom row second from left","ref_id":46440},{"sent":"second row from bottom right","ref_id":46441},{"sent":"right giraffe","ref_id":46476},{"sent":"giraffe on left","ref_id":46477},{"sent":"truck on right","ref_id":46501},{"sent":"white truck","ref_id":46502},{"sent":"white truck","ref_id":46503},{"sent":"right elephant","ref_id":46569},{"sent":"middle elephant","ref_id":46570},{"sent":"right zebra","ref_id":46668},{"sent":"zebra in front","ref_id":46669},{"sent":"top pizza","ref_id":46684},{"sent":"pizza","ref_id":46685},{"sent":"car in front","ref_id":46724},{"sent":"car in front","ref_id":46725},{"sent":"sheep in front","ref_id":46744},{"sent":"bottom right bowl","ref_id":46773},{"sent":"bowl of food on left","ref_id":46774},{"sent":"middle bowl","ref_id":46775},{"sent":"left pizza","ref_id":46796},{"sent":"pizza on right","ref_id":46797},{"sent":"pizza on the right","ref_id":46798},{"sent":"bear on left","ref_id":46817},{"sent":"right bear","ref_id":46818},{"sent":"right plant","ref_id":46965},{"sent":"left UNK","ref_id":46966},{"sent":"left suitcase","ref_id":46986},{"sent":"right suitcase","ref_id":46987},{"sent":"right 
suitcase","ref_id":46988},{"sent":"right suitcase","ref_id":46989},{"sent":"suitcase in middle","ref_id":46990},{"sent":"top carrot","ref_id":47273},{"sent":"top carrot","ref_id":47274},{"sent":"top carrot","ref_id":47275},{"sent":"top carrot","ref_id":47276},{"sent":"bottom left carrot","ref_id":47277},{"sent":"right giraffe","ref_id":47305},{"sent":"left giraffe","ref_id":47306},{"sent":"bowl of food","ref_id":47310},{"sent":"bowl of rice","ref_id":47311},{"sent":"top right bowl","ref_id":47312},{"sent":"train on right","ref_id":47313},{"sent":"train on right","ref_id":47314},{"sent":"train on left","ref_id":47315},{"sent":"chair on left","ref_id":47318},{"sent":"right giraffe","ref_id":47366},{"sent":"left zebra","ref_id":47367},{"sent":"white cow","ref_id":47450},{"sent":"big cow","ref_id":47451},{"sent":"white sheep on right","ref_id":47452},{"sent":"white sheep on right","ref_id":47453},{"sent":"top right sheep","ref_id":47454},{"sent":"white cow","ref_id":47455},{"sent":"top left suitcase","ref_id":47529},{"sent":"white boat on left","ref_id":47530},{"sent":"white boat","ref_id":47531},{"sent":"left giraffe","ref_id":47603},{"sent":"right giraffe","ref_id":47604},{"sent":"white plate","ref_id":47644},{"sent":"top left food","ref_id":47645},{"sent":"bear on right","ref_id":47740},{"sent":"bear on right","ref_id":47741},{"sent":"bear in middle","ref_id":47742},{"sent":"bear on right","ref_id":47743},{"sent":"bed on left","ref_id":47840},{"sent":"bed","ref_id":47841},{"sent":"black dog","ref_id":47875},{"sent":"dog on right","ref_id":47876},{"sent":"giraffe in front","ref_id":47931},{"sent":"giraffe on left","ref_id":47932},{"sent":"left person","ref_id":47957},{"sent":"giraffe on left","ref_id":48055},{"sent":"giraffe","ref_id":48056},{"sent":"bowl of food","ref_id":48175},{"sent":"right bowl","ref_id":48176},{"sent":"zebra in front","ref_id":48302},{"sent":"right zebra","ref_id":48303},{"sent":"bottom left food","ref_id":48441},{"sent":"left piece of food","ref_id":48442},{"sent":"top right food","ref_id":48443},{"sent":"right food","ref_id":48444},{"sent":"left vase","ref_id":48545},{"sent":"right vase","ref_id":48546},{"sent":"second bike from right","ref_id":48585},{"sent":"second from right","ref_id":48586},{"sent":"second from left","ref_id":48587},{"sent":"left bike","ref_id":48588},{"sent":"second bike from right","ref_id":48589},{"sent":"right half of sandwich","ref_id":48623},{"sent":"left sandwich","ref_id":48624},{"sent":"right sandwich","ref_id":48625},{"sent":"bottom right chair","ref_id":48681},{"sent":"right couch","ref_id":48682},{"sent":"bottom left corner","ref_id":48683},{"sent":"couch on right","ref_id":48684},{"sent":"couch","ref_id":48685},{"sent":"left couch","ref_id":48686},{"sent":"left cake","ref_id":48861},{"sent":"right cake","ref_id":48862},{"sent":"left elephant","ref_id":48865},{"sent":"baby elephant","ref_id":48866},{"sent":"right elephant","ref_id":48888},{"sent":"left elephant","ref_id":48889},{"sent":"table in front","ref_id":49111},{"sent":"right truck","ref_id":49165},{"sent":"truck","ref_id":49166},{"sent":"truck","ref_id":49167},{"sent":"right truck","ref_id":49168},{"sent":"right bike","ref_id":49248},{"sent":"right bike","ref_id":49249},{"sent":"right bike","ref_id":49250},{"sent":"donut in middle","ref_id":49288},{"sent":"donut on left","ref_id":49289},{"sent":"donut in middle","ref_id":49290},{"sent":"giraffe on left","ref_id":49291},{"sent":"left giraffe","ref_id":49292},{"sent":"giraffe in front","ref_id":49293},{"sent":"left 
cup","ref_id":49377},{"sent":"right cup","ref_id":49378},{"sent":"left bottle","ref_id":49429},{"sent":"bottle on right","ref_id":49430},{"sent":"pizza","ref_id":49446},{"sent":"pizza slice on right","ref_id":49447},{"sent":"pizza slice on right","ref_id":49448},{"sent":"broccoli on the right","ref_id":49455},{"sent":"broccoli in the middle","ref_id":49456},{"sent":"second board from right","ref_id":49502},{"sent":"blue board","ref_id":49503},{"sent":"right plant","ref_id":49583},{"sent":"left plant","ref_id":49584},{"sent":"black suitcase","ref_id":49672},{"sent":"blue tie","ref_id":49673},{"sent":"middle bus","ref_id":49701},{"sent":"second bus from right","ref_id":49702},{"sent":"right bus","ref_id":49703},{"sent":"right suitcase","ref_id":49721},{"sent":"right side of pizza","ref_id":49781},{"sent":"right slice","ref_id":49782},{"sent":"dog on right","ref_id":49818},{"sent":"left dog","ref_id":49819},{"sent":"left car","ref_id":49824},{"sent":"white car","ref_id":49825},{"sent":"right cup","ref_id":49949},{"sent":"right cup","ref_id":49950},{"sent":"zebra in back","ref_id":49986},{"sent":"zebra on left","ref_id":49987},{"sent":"car on left","ref_id":25},{"sent":"car on left","ref_id":26},{"sent":"top sandwich","ref_id":27},{"sent":"top left donut","ref_id":28},{"sent":"zebra on left","ref_id":45},{"sent":"right zebra","ref_id":46},{"sent":"chair in front of man","ref_id":164},{"sent":"bottom right corner","ref_id":165},{"sent":"left chair","ref_id":166},{"sent":"top right corner","ref_id":232},{"sent":"pizza in front","ref_id":233},{"sent":"glass in back","ref_id":234},{"sent":"left glass","ref_id":235},{"sent":"yellow fruit on left","ref_id":259}]}
================================================
FILE: requirements.txt
================================================
requests
filelock
tqdm
timm
mmcv-full==1.3.12
mmsegmentation==0.17.0
ftfy
regex
scipy
scikit-image
pycocotools==2.0.2
opencv-python==4.5.3.56
tokenizers==0.8.1rc1
h5py
================================================
FILE: test.py
================================================
import datetime
import os
import time
import torch
import torch.utils.data
from torch import nn
from bert.modeling_bert import BertModel
import torchvision
from lib import segmentation
import transforms as T
import utils
import numpy as np
from PIL import Image
import torch.nn.functional as F
def get_dataset(image_set, transform, args):
    from data.dataset_refer_bert import ReferDataset
    ds = ReferDataset(args,
                      split=image_set,
                      image_transforms=transform,
                      target_transforms=None,
                      eval_mode=True
                      )
    num_classes = 2
    return ds, num_classes

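# Evaluation protocol implemented below: each referring expression of a test
# object is scored independently. Per-expression IoU is collected for the mean
# IoU (mIoU), the running pixel intersections/unions give the overall IoU
# (oIoU), and precision@X counts expressions whose IoU reaches threshold X.
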
def evaluate(model, data_loader, bert_model, device):
    model.eval()
    metric_logger = utils.MetricLogger(delimiter=" ")

    # evaluation variables
    cum_I, cum_U = 0, 0
    eval_seg_iou_list = [.5, .6, .7, .8, .9]
    seg_correct = np.zeros(len(eval_seg_iou_list), dtype=np.int32)
    seg_total = 0
    mean_IoU = []
    header = 'Test:'

    with torch.no_grad():
        for data in metric_logger.log_every(data_loader, 100, header):
            image, target, sentences, attentions = data
            image, target, sentences, attentions = image.to(device), target.to(device), \
                                                   sentences.to(device), attentions.to(device)
            sentences = sentences.squeeze(1)
            attentions = attentions.squeeze(1)
            target = target.cpu().data.numpy()
            for j in range(sentences.size(-1)):
                if bert_model is not None:
                    last_hidden_states = bert_model(sentences[:, :, j], attention_mask=attentions[:, :, j])[0]
                    embedding = last_hidden_states.permute(0, 2, 1)
                    output = model(image, embedding, l_mask=attentions[:, :, j].unsqueeze(-1))
                else:
                    output = model(image, sentences[:, :, j], l_mask=attentions[:, :, j])

                output = output.cpu()
                output_mask = output.argmax(1).data.numpy()
                I, U = computeIoU(output_mask, target)
                if U == 0:
                    this_iou = 0.0
                else:
                    this_iou = I*1.0/U
                mean_IoU.append(this_iou)
                cum_I += I
                cum_U += U
                for n_eval_iou in range(len(eval_seg_iou_list)):
                    eval_seg_iou = eval_seg_iou_list[n_eval_iou]
                    seg_correct[n_eval_iou] += (this_iou >= eval_seg_iou)
                seg_total += 1

            del image, target, sentences, attentions, output, output_mask
            if bert_model is not None:
                del last_hidden_states, embedding

    mean_IoU = np.array(mean_IoU)
    mIoU = np.mean(mean_IoU)
    print('Final results:')
    print('Mean IoU is %.2f\n' % (mIoU*100.))
    results_str = ''
    for n_eval_iou in range(len(eval_seg_iou_list)):
        results_str += ' precision@%s = %.2f\n' % \
                       (str(eval_seg_iou_list[n_eval_iou]), seg_correct[n_eval_iou] * 100. / seg_total)
    results_str += ' overall IoU = %.2f\n' % (cum_I * 100. / cum_U)
    print(results_str)

def get_transform(args):
    transforms = [T.Resize(args.img_size, args.img_size),
                  T.ToTensor(),
                  T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
                  ]
    return T.Compose(transforms)


def computeIoU(pred_seg, gd_seg):
    I = np.sum(np.logical_and(pred_seg, gd_seg))
    U = np.sum(np.logical_or(pred_seg, gd_seg))
    return I, U

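# Note: computeIoU returns raw pixel counts rather than a ratio; evaluate()
# divides I by U per expression and also accumulates both counts over the
# whole split, so the cumulative (overall) IoU weights large objects more
# heavily than the per-expression mean IoU.
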
def main(args):
    device = torch.device(args.device)
    dataset_test, _ = get_dataset(args.split, get_transform(args=args), args)
    test_sampler = torch.utils.data.SequentialSampler(dataset_test)
    data_loader_test = torch.utils.data.DataLoader(dataset_test, batch_size=1,
                                                   sampler=test_sampler, num_workers=args.workers)
    print(args.model)
    single_model = segmentation.__dict__[args.model](pretrained='', args=args)
    checkpoint = torch.load(args.resume, map_location='cpu')
    single_model.load_state_dict(checkpoint['model'])
    model = single_model.to(device)

    if args.model != 'lavt_one':
        model_class = BertModel
        single_bert_model = model_class.from_pretrained(args.ck_bert)
        # work-around for a transformers bug; need to update to a newer version of transformers to remove these two lines
        if args.ddp_trained_weights:
            single_bert_model.pooler = None
        single_bert_model.load_state_dict(checkpoint['bert_model'])
        bert_model = single_bert_model.to(device)
    else:
        bert_model = None

    evaluate(model, data_loader_test, bert_model, device=device)

if __name__ == "__main__":
from args import get_parser
parser = get_parser()
args = parser.parse_args()
print('Image size: {}'.format(str(args.img_size)))
main(args)
================================================
FILE: train.py
================================================
import datetime
import os
import time
import torch
import torch.utils.data
from torch import nn
from functools import reduce
import operator
from bert.modeling_bert import BertModel
import torchvision
from lib import segmentation
import transforms as T
import utils
import numpy as np
import torch.nn.functional as F
import gc
from collections import OrderedDict
def get_dataset(image_set, transform, args):
    from data.dataset_refer_bert import ReferDataset
    ds = ReferDataset(args,
                      split=image_set,
                      image_transforms=transform,
                      target_transforms=None
                      )
    num_classes = 2
    return ds, num_classes

# IoU calculation for validation
def IoU(pred, gt):
    pred = pred.argmax(1)

    intersection = torch.sum(torch.mul(pred, gt))
    union = torch.sum(torch.add(pred, gt)) - intersection
    if intersection == 0 or union == 0:
        iou = 0
    else:
        iou = float(intersection) / float(union)
    return iou, intersection, union

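# Note: IoU() assumes the ground-truth mask is binary; pred.argmax(1) collapses
# the two-class logits to a {0, 1} mask, and a zero intersection or union is
# scored as IoU 0 instead of raising a division error.
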
def get_transform(args):
    transforms = [T.Resize(args.img_size, args.img_size),
                  T.ToTensor(),
                  T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
                  ]
    return T.Compose(transforms)

def criterion(input, target):
    weight = torch.FloatTensor([0.9, 1.1]).cuda()
    return nn.functional.cross_entropy(input, target, weight=weight)

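# The class weights above give slightly more emphasis to class 1 (presumably
# the referred region) than to class 0 (background) in the cross-entropy loss.
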
def evaluate(model, data_loader, bert_model):
    model.eval()
    metric_logger = utils.MetricLogger(delimiter=" ")
    header = 'Test:'
    total_its = 0
    acc_ious = 0

    # evaluation variables
    cum_I, cum_U = 0, 0
    eval_seg_iou_list = [.5, .6, .7, .8, .9]
    seg_correct = np.zeros(len(eval_seg_iou_list), dtype=np.int32)
    seg_total = 0
    mean_IoU = []

    with torch.no_grad():
        for data in metric_logger.log_every(data_loader, 100, header):
            total_its += 1
            image, target, sentences, attentions = data
            image, target, sentences, attentions = image.cuda(non_blocking=True),\
                                                   target.cuda(non_blocking=True),\
                                                   sentences.cuda(non_blocking=True),\
                                                   attentions.cuda(non_blocking=True)
            sentences = sentences.squeeze(1)
            attentions = attentions.squeeze(1)

            if bert_model is not None:
                last_hidden_states = bert_model(sentences, attention_mask=attentions)[0]
                embedding = last_hidden_states.permute(0, 2, 1)  # (B, 768, N_l) to make Conv1d happy
                attentions = attentions.unsqueeze(dim=-1)  # (B, N_l, 1)
                output = model(image, embedding, l_mask=attentions)
            else:
                output = model(image, sentences, l_mask=attentions)

            iou, I, U = IoU(output, target)
            acc_ious += iou
            mean_IoU.append(iou)
            cum_I += I
            cum_U += U
            for n_eval_iou in range(len(eval_seg_iou_list)):
                eval_seg_iou = eval_seg_iou_list[n_eval_iou]
                seg_correct[n_eval_iou] += (iou >= eval_seg_iou)
            seg_total += 1
        iou = acc_ious / total_its

    mean_IoU = np.array(mean_IoU)
    mIoU = np.mean(mean_IoU)
    print('Final results:')
    print('Mean IoU is %.2f\n' % (mIoU * 100.))
    results_str = ''
    for n_eval_iou in range(len(eval_seg_iou_list)):
        results_str += ' precision@%s = %.2f\n' % \
                       (str(eval_seg_iou_list[n_eval_iou]), seg_correct[n_eval_iou] * 100. / seg_total)
    results_str += ' overall IoU = %.2f\n' % (cum_I * 100. / cum_U)
    print(results_str)

    return 100 * iou, 100 * cum_I / cum_U

def train_one_epoch(model, criterion, optimizer, data_loader, lr_scheduler, epoch, print_freq,
                    iterations, bert_model):
    model.train()
    metric_logger = utils.MetricLogger(delimiter=" ")
    metric_logger.add_meter('lr', utils.SmoothedValue(window_size=1, fmt='{value}'))
    header = 'Epoch: [{}]'.format(epoch)
    train_loss = 0
    total_its = 0

    for data in metric_logger.log_every(data_loader, print_freq, header):
        total_its += 1
        image, target, sentences, attentions = data
        image, target, sentences, attentions = image.cuda(non_blocking=True),\
                                               target.cuda(non_blocking=True),\
                                               sentences.cuda(non_blocking=True),\
                                               attentions.cuda(non_blocking=True)
        sentences = sentences.squeeze(1)
        attentions = attentions.squeeze(1)

        if bert_model is not None:
            last_hidden_states = bert_model(sentences, attention_mask=attentions)[0]  # (6, 10, 768)
            embedding = last_hidden_states.permute(0, 2, 1)  # (B, 768, N_l) to make Conv1d happy
            attentions = attentions.unsqueeze(dim=-1)  # (batch, N_l, 1)
            output = model(image, embedding, l_mask=attentions)
        else:
            output = model(image, sentences, l_mask=attentions)

        loss = criterion(output, target)
        optimizer.zero_grad()  # set_to_none=True is only available in pytorch 1.6+
        loss.backward()
        optimizer.step()
        lr_scheduler.step()

        torch.cuda.synchronize()
        train_loss += loss.item()
        iterations += 1
        metric_logger.update(loss=loss.item(), lr=optimizer.param_groups[0]["lr"])

        del image, target, sentences, attentions, loss, output, data
        if bert_model is not None:
            del last_hidden_states, embedding
        gc.collect()
        torch.cuda.empty_cache()
        torch.cuda.synchronize()

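# Note: the per-iteration gc.collect() / torch.cuda.empty_cache() calls above
# trade some throughput for a lower cached-memory footprint; they are not
# required for the correctness of the training step itself.
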
def main(args):
dataset, num_classes = get_dataset("train",
get_transform(args=args),
args=args)
dataset_test, _ = get_dataset("val",
get_transform(args=args),
args=args)
# batch sampler
print(f"local rank {args.local_rank} / global rank {utils.get_rank()} successfully built train dataset.")
num_tasks = utils.get_world_size()
global_rank = utils.get_rank()
train_sampler = torch.utils.data.distributed.DistributedSampler(dataset, num_replicas=num_tasks, rank=global_rank,
shuffle=True)
test_sampler = torch.utils.data.SequentialSampler(dataset_test)
# data loader
data_loader = torch.utils.data.DataLoader(
dataset, batch_size=args.batch_size,
sampler=train_sampler, num_workers=args.workers, pin_memory=args.pin_mem, drop_last=True)
data_loader_test = torch.utils.data.DataLoader(
dataset_test, batch_size=1, sampler=test_sampler, num_workers=args.workers)
# model initialization
print(args.model)
model = segmentation.__dict__[args.model](pretrained=args.pretrained_swin_weights,
args=args)
model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)
model.cuda()
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.local_rank], find_unused_parameters=True)
single_model = model.module
if args.model != 'lavt_one':
model_class = BertModel
bert_model = model_class.from_pretrained(args.ck_bert)
        bert_model.pooler = None  # work-around for a bug in Transformers == 3.0.2 that appears when using DistributedDataParallel
bert_model.cuda()
bert_model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(bert_model)
bert_model = torch.nn.parallel.DistributedDataParallel(bert_model, device_ids=[args.local_rank])
single_bert_model = bert_model.module
else:
bert_model = None
single_bert_model = None
# resume training
if args.resume:
checkpoint = torch.load(args.resume, map_location='cpu')
single_model.load_state_dict(checkpoint['model'])
if args.model != 'lavt_one':
single_bert_model.load_state_dict(checkpoint['bert_model'])
# parameters to optimize
backbone_no_decay = list()
backbone_decay = list()
for name, m in single_model.backbone.named_parameters():
if 'norm' in name or 'absolute_pos_embed' in name or 'relative_position_bias_table' in name:
backbone_no_decay.append(m)
else:
backbone_decay.append(m)
if args.model != 'lavt_one':
params_to_optimize = [
{'params': backbone_no_decay, 'weight_decay': 0.0},
{'params': backbone_decay},
{"params": [p for p in single_model.classifier.parameters() if p.requires_grad]},
# the following are the parameters of bert
{"params": reduce(operator.concat,
[[p for p in single_bert_model.encoder.layer[i].parameters()
if p.requires_grad] for i in range(10)])},
]
else:
params_to_optimize = [
{'params': backbone_no_decay, 'weight_decay': 0.0},
{'params': backbone_decay},
{"params": [p for p in single_model.classifier.parameters() if p.requires_grad]},
# the following are the parameters of bert
{"params": reduce(operator.concat,
[[p for p in single_model.text_encoder.encoder.layer[i].parameters()
if p.requires_grad] for i in range(10)])},
]
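    # Both branches above build the same four parameter groups: the Swin
    # backbone's norm layers and positional-embedding tables with no weight
    # decay, the rest of the backbone with default decay, the classifier/decoder,
    # and only the first 10 BERT encoder layers (later layers are not updated by
    # the optimizer). The branches differ only in whether the text encoder is the
    # separate bert_model or model.text_encoder inside 'lavt_one'.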
# optimizer
optimizer = torch.optim.AdamW(params_to_optimize,
lr=args.lr,
weight_decay=args.weight_decay,
amsgrad=args.amsgrad
)
# learning rate scheduler
lr_scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer,
lambda x: (1 - x / (len(data_loader) * args.epochs)) ** 0.9)
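    # LambdaLR multiplies the base lr by the returned factor and is stepped once
    # per iteration in train_one_epoch, so with total_steps = len(data_loader) *
    # args.epochs the schedule is lr(x) = args.lr * (1 - x / total_steps) ** 0.9,
    # a polynomial decay towards 0 (e.g. the factor is 0.5 ** 0.9 ≈ 0.536 at the
    # halfway point).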
# housekeeping
start_time = time.time()
iterations = 0
best_oIoU = -0.1
# resume training (optimizer, lr scheduler, and the epoch)
if args.resume:
optimizer.load_state_dict(checkpoint['optimizer'])
lr_scheduler.load_state_dict(checkpoint['lr_scheduler'])
resume_epoch = checkpoint['epoch']
else:
resume_epoch = -999
# training loops
for epoch in range(max(0, resume_epoch+1), args.epochs):
data_loader.sampler.set_epoch(epoch)
train_one_epoch(model, criterion, optimizer, data_loader, lr_scheduler, epoch, args.print_freq,
iterations, bert_model)
iou, overallIoU = evaluate(model, data_loader_test, bert_model)
print('Average object IoU {}'.format(iou))
print('Overall IoU {}'.format(overallIoU))
save_checkpoint = (best_oIoU < overallIoU)
if save_checkpoint:
print('Better epoch: {}\n'.format(epoch))
if single_bert_model is not None:
dict_to_save = {'model': single_model.state_dict(), 'bert_model': single_bert_model.state_dict(),
'optimizer': optimizer.state_dict(), 'epoch': epoch, 'args': args,
'lr_scheduler': lr_scheduler.state_dict()}
else:
dict_to_save = {'model': single_model.state_dict(),
'optimizer': optimizer.state_dict(), 'epoch': epoch, 'args': args,
'lr_scheduler': lr_scheduler.state_dict()}
utils.save_on_master(dict_to_save, os.path.join(args.output_dir,
'model_best_{}.pth'.format(args.model_id)))
best_oIoU = overallIoU
# summarize
total_time = time.time() - start_time
total_time_str = str(datetime.timedelta(seconds=int(total_time)))
print('Training time {}'.format(total_time_str))
if __name__ == "__main__":
from args import get_parser
parser = get_parser()
args = parser.parse_args()
# set up distributed learning
utils.init_distributed_mode(args)
print('Image size: {}'.format(str(args.img_size)))
main(args)
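# --- Illustrative usage note (not part of the original file) ------------------
# train.py assumes the environment prepared by a distributed launcher: RANK and
# WORLD_SIZE exported per process and a --local_rank argument (consumed by
# utils.init_distributed_mode). With the legacy launcher this is, for example:
#     python -m torch.distributed.launch --nproc_per_node=<num_gpus> train.py <args...>
# The dataset/model flags themselves are defined in args.get_parser and are not
# reproduced here.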
================================================
FILE: transforms.py
================================================
import numpy as np
from PIL import Image
import random
import torch
from torchvision import transforms as T
from torchvision.transforms import functional as F
def pad_if_smaller(img, size, fill=0):
min_size = min(img.size)
if min_size < size:
ow, oh = img.size
padh = size - oh if oh < size else 0
padw = size - ow if ow < size else 0
        img = F.pad(img, (0, 0, padw, padh), fill=fill)  # torchvision pad order is (left, top, right, bottom): pad right/bottom only
return img
class Compose(object):
def __init__(self, transforms):
self.transforms = transforms
def __call__(self, image, target):
for t in self.transforms:
image, target = t(image, target)
return image, target
class Resize(object):
def __init__(self, h, w):
self.h = h
self.w = w
def __call__(self, image, target):
image = F.resize(image, (self.h, self.w))
# If size is a sequence like (h, w), the output size will be matched to this.
# If size is an int, the smaller edge of the image will be matched to this number maintaining the aspect ratio
target = F.resize(target, (self.h, self.w), interpolation=Image.NEAREST)
return image, target
class RandomResize(object):
def __init__(self, min_size, max_size=None):
self.min_size = min_size
if max_size is None:
max_size = min_size
self.max_size = max_size
def __call__(self, image, target):
size = random.randint(self.min_size, self.max_size) # Return a random integer N such that a <= N <= b. Alias for randrange(a, b+1)
image = F.resize(image, size)
# If size is a sequence like (h, w), the output size will be matched to this.
# If size is an int, the smaller edge of the image will be matched to this number maintaining the aspect ratio
target = F.resize(target, size, interpolation=Image.NEAREST)
return image, target
class RandomHorizontalFlip(object):
def __init__(self, flip_prob):
self.flip_prob = flip_prob
def __call__(self, image, target):
if random.random() < self.flip_prob:
image = F.hflip(image)
target = F.hflip(target)
return image, target
class RandomCrop(object):
def __init__(self, size):
self.size = size
def __call__(self, image, target):
image = pad_if_smaller(image, self.size)
target = pad_if_smaller(target, self.size, fill=255)
crop_params = T.RandomCrop.get_params(image, (self.size, self.size))
image = F.crop(image, *crop_params)
target = F.crop(target, *crop_params)
return image, target
class CenterCrop(object):
def __init__(self, size):
self.size = size
def __call__(self, image, target):
image = F.center_crop(image, self.size)
target = F.center_crop(target, self.size)
return image, target
class ToTensor(object):
def __call__(self, image, target):
image = F.to_tensor(image)
target = torch.as_tensor(np.asarray(target).copy(), dtype=torch.int64)
return image, target
class RandomAffine(object):
def __init__(self, angle, translate, scale, shear, resample=0, fillcolor=None):
self.angle = angle
self.translate = translate
self.scale = scale
self.shear = shear
self.resample = resample
self.fillcolor = fillcolor
def __call__(self, image, target):
affine_params = T.RandomAffine.get_params(self.angle, self.translate, self.scale, self.shear, image.size)
image = F.affine(image, *affine_params)
target = F.affine(target, *affine_params)
return image, target
class Normalize(object):
def __init__(self, mean, std):
self.mean = mean
self.std = std
def __call__(self, image, target):
image = F.normalize(image, mean=self.mean, std=self.std)
return image, target
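# --- Illustrative usage (not part of the original file) -----------------------
# Unlike torchvision's single-input transforms, every class in this module takes
# and returns an (image, target) pair. A composition along the lines of what
# get_transform in train.py builds (the sizes and statistics below are
# assumptions, not the repository's settings) might be:
#
#     transform = Compose([
#         Resize(480, 480),
#         ToTensor(),
#         Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
#     ])
#     image_tensor, target_tensor = transform(pil_image, pil_mask)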
================================================
FILE: utils.py
================================================
from __future__ import print_function
from collections import defaultdict, deque
import datetime
import math
import time
import torch
import torch.distributed as dist
import torch.backends.cudnn as cudnn
import errno
import os
import sys
class SmoothedValue(object):
"""Track a series of values and provide access to smoothed values over a
window or the global series average.
"""
def __init__(self, window_size=20, fmt=None):
if fmt is None:
fmt = "{median:.4f} ({global_avg:.4f})"
self.deque = deque(maxlen=window_size)
self.total = 0.0
self.count = 0
self.fmt = fmt
def update(self, value, n=1):
self.deque.append(value)
self.count += n
self.total += value * n
def synchronize_between_processes(self):
"""
Warning: does not synchronize the deque!
"""
if not is_dist_avail_and_initialized():
return
t = torch.tensor([self.count, self.total], dtype=torch.float64, device='cuda')
dist.barrier()
dist.all_reduce(t)
t = t.tolist()
self.count = int(t[0])
self.total = t[1]
@property
def median(self):
d = torch.tensor(list(self.deque))
return d.median().item()
@property
def avg(self):
d = torch.tensor(list(self.deque), dtype=torch.float32)
return d.mean().item()
@property
def global_avg(self):
return self.total / self.count
@property
def max(self):
return max(self.deque)
@property
def value(self):
return self.deque[-1]
def __str__(self):
return self.fmt.format(
median=self.median,
avg=self.avg,
global_avg=self.global_avg,
max=self.max,
value=self.value)
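# --- Illustrative usage (not part of the original file) -----------------------
# SmoothedValue keeps a bounded window for `median`/`avg` and running totals for
# `global_avg`; for example:
#
#     v = SmoothedValue(window_size=3, fmt='{median:.2f} ({global_avg:.2f})')
#     for x in (1.0, 2.0, 3.0, 4.0):
#         v.update(x)
#     str(v)  # '3.00 (2.50)': median of the last 3 values, average of all 4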
class MetricLogger(object):
def __init__(self, delimiter="\t"):
self.meters = defaultdict(SmoothedValue)
self.delimiter = delimiter
def update(self, **kwargs):
for k, v in kwargs.items():
if isinstance(v, torch.Tensor):
v = v.item()
assert isinstance(v, (float, int))
self.meters[k].update(v)
def __getattr__(self, attr):
if attr in self.meters:
return self.meters[attr]
if attr in self.__dict__:
return self.__dict__[attr]
raise AttributeError("'{}' object has no attribute '{}'".format(
type(self).__name__, attr))
def __str__(self):
loss_str = []
for name, meter in self.meters.items():
loss_str.append(
"{}: {}".format(name, str(meter))
)
return self.delimiter.join(loss_str)
def synchronize_between_processes(self):
for meter in self.meters.values():
meter.synchronize_between_processes()
def add_meter(self, name, meter):
self.meters[name] = meter
def log_every(self, iterable, print_freq, header=None):
i = 0
if not header:
header = ''
start_time = time.time()
end = time.time()
iter_time = SmoothedValue(fmt='{avg:.4f}')
data_time = SmoothedValue(fmt='{avg:.4f}')
space_fmt = ':' + str(len(str(len(iterable)))) + 'd'
log_msg = self.delimiter.join([
header,
'[{0' + space_fmt + '}/{1}]',
'eta: {eta}',
'{meters}',
'time: {time}',
'data: {data}',
'max mem: {memory:.0f}'
])
MB = 1024.0 * 1024.0
for obj in iterable:
data_time.update(time.time() - end)
yield obj
iter_time.update(time.time() - end)
if i % print_freq == 0:
eta_seconds = iter_time.global_avg * (len(iterable) - i)
eta_string = str(datetime.timedelta(seconds=int(eta_seconds)))
print(log_msg.format(
i, len(iterable), eta=eta_string,
meters=str(self),
time=str(iter_time), data=str(data_time),
memory=torch.cuda.max_memory_allocated() / MB))
sys.stdout.flush()
i += 1
end = time.time()
total_time = time.time() - start_time
total_time_str = str(datetime.timedelta(seconds=int(total_time)))
print('{} Total time: {}'.format(header, total_time_str))
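# --- Illustrative usage (not part of the original file) -----------------------
# MetricLogger is driven from train.py roughly as follows (the literal values
# are placeholders):
#
#     logger = MetricLogger(delimiter="  ")
#     logger.add_meter('lr', SmoothedValue(window_size=1, fmt='{value}'))
#     for batch in logger.log_every(data_loader, print_freq=10, header='Epoch: [0]'):
#         ...  # forward/backward/step
#         logger.update(loss=0.42, lr=5e-5)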
def mkdir(path):
try:
os.makedirs(path)
except OSError as e:
if e.errno != errno.EEXIST:
raise
def setup_for_distributed(is_master):
"""
    This function disables printing when not in the master process
"""
import builtins as __builtin__
builtin_print = __builtin__.print
def print(*args, **kwargs):
force = kwargs.pop('force', False)
if is_master or force:
builtin_print(*args, **kwargs)
__builtin__.print = print
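# Note: after setup_for_distributed, plain print() is silenced on non-master
# ranks; a message can still be forced through from any rank with
# print('...', force=True).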
def is_dist_avail_and_initialized():
if not dist.is_available():
return False
if not dist.is_initialized():
return False
return True
def get_world_size():
if not is_dist_avail_and_initialized():
return 1
return dist.get_world_size()
def get_rank():
if not is_dist_avail_and_initialized():
return 0
return dist.get_rank()
def is_main_process():
return get_rank() == 0
def save_on_master(*args, **kwargs):
if is_main_process():
torch.save(*args, **kwargs)
def init_distributed_mode(args):
if 'RANK' in os.environ and 'WORLD_SIZE' in os.environ:
rank = int(os.environ["RANK"])
world_size = int(os.environ['WORLD_SIZE'])
print(f"RANK and WORLD_SIZE in environment: {rank}/{world_size}")
else:
rank = -1
world_size = -1
torch.cuda.set_device(args.local_rank)
torch.distributed.init_process_group(backend='nccl', init_method='env://', world_size=world_size, rank=rank)
torch.distributed.barrier()
setup_for_distributed(is_main_process())
if args.output_dir:
mkdir(args.output_dir)
if args.model_id:
mkdir(os.path.join('./models/', args.model_id))
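# Note: init_distributed_mode relies on the launcher to export RANK and
# WORLD_SIZE; when they are missing it falls back to -1 for both, and the
# env:// init_process_group call will then fail, so the script should be
# started through a distributed launcher even for a single GPU.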