Full Code of kapadias/mediumposts for AI

Repository: kapadias/mediumposts
Branch: master
Commit: 749c93d1e637
Files: 26
Total size: 879.1 KB

Directory structure:
mediumposts/

├── LICENSE
├── general-data-science/
│   └── similarities-measures/
│       ├── pyproject.toml
│       └── similarity-measures.ipynb
├── natural-language-processing/
│   ├── embedding-models/
│   │   ├── data/
│   │   │   └── training_data.csv
│   │   ├── domain_adaption_fine_tune_nlp_model.ipynb
│   │   ├── logs/
│   │   │   └── fit/
│   │   │       └── 20230630-152712/
│   │   │           ├── train/
│   │   │           │   └── events.out.tfevents.1688153232.Shashanks-MacBook-Pro-2.local.44453.0.v2
│   │   │           └── validation/
│   │   │               └── events.out.tfevents.1688153249.Shashanks-MacBook-Pro-2.local.44453.1.v2
│   │   ├── packages/
│   │   │   └── tensorflow_text-2.10.0-cp39-cp39-macosx_11_0_arm64.whl
│   │   ├── pyproject.toml
│   │   └── utils.py
│   ├── text-processing/
│   │   ├── Building Blocks Text Pre-Processing.ipynb
│   │   └── pyproject.toml
│   ├── topic-modeling/
│   │   ├── Evaluate Topic Models.ipynb
│   │   ├── Introduction to Topic Modeling.ipynb
│   │   ├── pyproject.toml
│   │   └── results/
│   │       ├── lda_tuning_results.csv
│   │       ├── ldavis_prepared_10
│   │       ├── ldavis_prepared_10.html
│   │       ├── ldavis_tuned_8
│   │       └── ldavis_tuned_8.html
│   └── transformers-series/
│       ├── pyproject.toml
│       └── sentiment_analysis_bert.ipynb
└── recommender/
    ├── published_notebooks/
    │   └── recommendation_python_lightfm.ipynb
    └── results/
        ├── profiler_books_metadata_1.html
        ├── profiler_books_metadata_2.html
        └── profiler_interactions.html

================================================
FILE CONTENTS
================================================

================================================
FILE: LICENSE
================================================
                    GNU GENERAL PUBLIC LICENSE
                       Version 3, 29 June 2007

 Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

                            Preamble

  The GNU General Public License is a free, copyleft license for
software and other kinds of works.

  The licenses for most software and other practical works are designed
to take away your freedom to share and change the works.  By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.  We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors.  You can apply it to
your programs, too.

  When we speak of free software, we are referring to freedom, not
price.  Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.

  To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights.  Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.

  For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received.  You must make sure that they, too, receive
or can get the source code.  And you must show them these terms so they
know their rights.

  Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.

  For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software.  For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.

  Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so.  This is fundamentally incompatible with the aim of
protecting users' freedom to change the software.  The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable.  Therefore, we
have designed this version of the GPL to prohibit the practice for those
products.  If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.

  Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary.  To prevent this, the GPL assures that
patents cannot be used to render the program non-free.

  The precise terms and conditions for copying, distribution and
modification follow.

                       TERMS AND CONDITIONS

  0. Definitions.

  "This License" refers to version 3 of the GNU General Public License.

  "Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.

  "The Program" refers to any copyrightable work licensed under this
License.  Each licensee is addressed as "you".  "Licensees" and
"recipients" may be individuals or organizations.

  To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy.  The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.

  A "covered work" means either the unmodified Program or a work based
on the Program.

  To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy.  Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.

  To "convey" a work means any kind of propagation that enables other
parties to make or receive copies.  Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.

  An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License.  If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.

  1. Source Code.

  The "source code" for a work means the preferred form of the work
for making modifications to it.  "Object code" means any non-source
form of a work.

  A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.

  The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form.  A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.

  The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities.  However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work.  For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.

  The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.

  The Corresponding Source for a work in source code form is that
same work.

  2. Basic Permissions.

  All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met.  This License explicitly affirms your unlimited
permission to run the unmodified Program.  The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work.  This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.

  You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force.  You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright.  Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.

  Conveying under any other circumstances is permitted solely under
the conditions stated below.  Sublicensing is not allowed; section 10
makes it unnecessary.

  3. Protecting Users' Legal Rights From Anti-Circumvention Law.

  No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.

  When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.

  4. Conveying Verbatim Copies.

  You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.

  You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.

  5. Conveying Modified Source Versions.

  You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:

    a) The work must carry prominent notices stating that you modified
    it, and giving a relevant date.

    b) The work must carry prominent notices stating that it is
    released under this License and any conditions added under section
    7.  This requirement modifies the requirement in section 4 to
    "keep intact all notices".

    c) You must license the entire work, as a whole, under this
    License to anyone who comes into possession of a copy.  This
    License will therefore apply, along with any applicable section 7
    additional terms, to the whole of the work, and all its parts,
    regardless of how they are packaged.  This License gives no
    permission to license the work in any other way, but it does not
    invalidate such permission if you have separately received it.

    d) If the work has interactive user interfaces, each must display
    Appropriate Legal Notices; however, if the Program has interactive
    interfaces that do not display Appropriate Legal Notices, your
    work need not make them do so.

  A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit.  Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.

  6. Conveying Non-Source Forms.

  You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:

    a) Convey the object code in, or embodied in, a physical product
    (including a physical distribution medium), accompanied by the
    Corresponding Source fixed on a durable physical medium
    customarily used for software interchange.

    b) Convey the object code in, or embodied in, a physical product
    (including a physical distribution medium), accompanied by a
    written offer, valid for at least three years and valid for as
    long as you offer spare parts or customer support for that product
    model, to give anyone who possesses the object code either (1) a
    copy of the Corresponding Source for all the software in the
    product that is covered by this License, on a durable physical
    medium customarily used for software interchange, for a price no
    more than your reasonable cost of physically performing this
    conveying of source, or (2) access to copy the
    Corresponding Source from a network server at no charge.

    c) Convey individual copies of the object code with a copy of the
    written offer to provide the Corresponding Source.  This
    alternative is allowed only occasionally and noncommercially, and
    only if you received the object code with such an offer, in accord
    with subsection 6b.

    d) Convey the object code by offering access from a designated
    place (gratis or for a charge), and offer equivalent access to the
    Corresponding Source in the same way through the same place at no
    further charge.  You need not require recipients to copy the
    Corresponding Source along with the object code.  If the place to
    copy the object code is a network server, the Corresponding Source
    may be on a different server (operated by you or a third party)
    that supports equivalent copying facilities, provided you maintain
    clear directions next to the object code saying where to find the
    Corresponding Source.  Regardless of what server hosts the
    Corresponding Source, you remain obligated to ensure that it is
    available for as long as needed to satisfy these requirements.

    e) Convey the object code using peer-to-peer transmission, provided
    you inform other peers where the object code and Corresponding
    Source of the work are being offered to the general public at no
    charge under subsection 6d.

  A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.

  A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling.  In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage.  For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product.  A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.

  "Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source.  The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.

  If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information.  But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).

  The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed.  Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.

  Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.

  7. Additional Terms.

  "Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law.  If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.

  When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it.  (Additional permissions may be written to require their own
removal in certain cases when you modify the work.)  You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.

  Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:

    a) Disclaiming warranty or limiting liability differently from the
    terms of sections 15 and 16 of this License; or

    b) Requiring preservation of specified reasonable legal notices or
    author attributions in that material or in the Appropriate Legal
    Notices displayed by works containing it; or

    c) Prohibiting misrepresentation of the origin of that material, or
    requiring that modified versions of such material be marked in
    reasonable ways as different from the original version; or

    d) Limiting the use for publicity purposes of names of licensors or
    authors of the material; or

    e) Declining to grant rights under trademark law for use of some
    trade names, trademarks, or service marks; or

    f) Requiring indemnification of licensors and authors of that
    material by anyone who conveys the material (or modified versions of
    it) with contractual assumptions of liability to the recipient, for
    any liability that these contractual assumptions directly impose on
    those licensors and authors.

  All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10.  If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term.  If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.

  If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.

  Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.

  8. Termination.

  You may not propagate or modify a covered work except as expressly
provided under this License.  Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).

  However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.

  Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.

  Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License.  If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.

  9. Acceptance Not Required for Having Copies.

  You are not required to accept this License in order to receive or
run a copy of the Program.  Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance.  However,
nothing other than this License grants you permission to propagate or
modify any covered work.  These actions infringe copyright if you do
not accept this License.  Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.

  10. Automatic Licensing of Downstream Recipients.

  Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License.  You are not responsible
for enforcing compliance by third parties with this License.

  An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations.  If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.

  You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License.  For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.

  11. Patents.

  A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based.  The
work thus licensed is called the contributor's "contributor version".

  A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version.  For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.

  Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.

  In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement).  To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.

  If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients.  "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.

  If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.

  A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License.  You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.

  Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.

  12. No Surrender of Others' Freedom.

  If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License.  If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all.  For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.

  13. Use with the GNU Affero General Public License.

  Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work.  The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.

  14. Revised Versions of this License.

  The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time.  Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.

  Each version is given a distinguishing version number.  If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation.  If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.

  If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.

  Later license versions may give you additional or different
permissions.  However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.

  15. Disclaimer of Warranty.

  THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW.  EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE.  THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU.  SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

  16. Limitation of Liability.

  IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.

  17. Interpretation of Sections 15 and 16.

  If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.

                     END OF TERMS AND CONDITIONS

            How to Apply These Terms to Your New Programs

  If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.

  To do so, attach the following notices to the program.  It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.

    <one line to give the program's name and a brief idea of what it does.>
    Copyright (C) <year>  <name of author>

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 3 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program.  If not, see <https://www.gnu.org/licenses/>.

Also add information on how to contact you by electronic and paper mail.

  If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:

    <program>  Copyright (C) <year>  <name of author>
    This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
    This is free software, and you are welcome to redistribute it
    under certain conditions; type `show c' for details.

The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License.  Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".

  You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.

  The GNU General Public License does not permit incorporating your program
into proprietary programs.  If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library.  If this is what you want to do, use the GNU Lesser General
Public License instead of this License.  But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.


================================================
FILE: general-data-science/similarities-measures/pyproject.toml
================================================
[tool.poetry]
name = "similarities-measures"
version = "0.1.0"
description = ""
authors = ["Shashank Kapadia <shashank.kapadia@randstadusa.com>"]
readme = "README.md"
packages = [{include = "similarities_measures"}]

[tool.poetry.dependencies]
python = ">=3.9,<3.10"
jupyterlab = "^3.5.2"
scikit-learn = "^1.2.0"
matplotlib = "^3.6.2"
seaborn = "^0.12.1"


[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"


================================================
FILE: general-data-science/similarities-measures/similarity-measures.ipynb
================================================
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "1d654a7e-fe50-495a-9235-303b16100d51",
   "metadata": {},
   "source": [
    "## Comparing 5 Data Similarity Measures\n",
    "##### Understanding Similarity Measures in Data Analysis and Machine Learning: A Comprehensive Guide\n",
    "** **\n",
    "*Preface: This article presents a summary of information about the given topic. It should not be considered original research. The information and code included in this article have may be influenced by things I have read or seen in the past from various online articles, research papers, books, and open-source code.*\n",
    "\n",
    "#### Introduction\n",
    "Similarity measures are a vital tool in many data analysis and machine learning tasks, allowing us to compare and evaluate the similarity between different pieces of data. Many different measures are available, each with pros and cons and suitable for different data types and tasks. \n",
    "\n",
    "This article will explore some of the most common similarity measures and compare their strengths and weaknesses. By understanding the characteristics and limitations of these measures, we can choose the most appropriate one for our specific needs and ensure the accuracy and relevance of our results.\n",
    "\n",
    "** **\n",
    "1. #### Euclidean Distance\n",
    "\n",
    "This measure calculates the straight-line distance between two points in n-dimensional space. It is often used for continuous numerical data and is easy to understand and implement. However, it can be sensitive to outliers and does not account for the relative importance of different features."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "662570b5-f57b-426f-9e4e-7c7ed13ee897",
   "metadata": {},
   "outputs": [],
   "source": [
    "from scipy.spatial import distance\n",
    "\n",
    "# Calculate Euclidean distance between two points\n",
    "point1 = [1, 2, 3]\n",
    "point2 = [4, 5, 6]\n",
    "\n",
    "# Use the euclidean function from scipy's distance module to calculate the Euclidean distance\n",
    "euclidean_distance = distance.euclidean(point1, point2)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "251a9a8b-1399-43b6-8d5d-312eaa9d87c1",
   "metadata": {},
   "source": [
    "#### Manhattan Distance\n",
    "\n",
    "This measure calculates the distance between two points by considering the absolute differences of their coordinates in each dimension and summing them. It is less sensitive to outliers than Euclidean distance, but it may not accurately reflect the actual distance between points in some cases."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9a535f95-56a2-4960-880f-dc5d8c003949",
   "metadata": {},
   "outputs": [],
   "source": [
    "from scipy.spatial import distance\n",
    "\n",
    "# Calculate Manhattan distance between two points\n",
    "point1 = [1, 2, 3]\n",
    "point2 = [4, 5, 6]\n",
    "\n",
    "# Use the cityblock function from scipy's distance module to calculate the Manhattan distance\n",
    "manhattan_distance = distance.cityblock(point1, point2)\n",
    "\n",
    "# Print the result\n",
    "print(\"Manhattan Distance between the given two points: \" + \\\n",
    "      str(manhattan_distance))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "20eee091-e3f0-4b67-87e6-165d5c36f0dd",
   "metadata": {},
   "source": [
    "#### Cosine Similarity\n",
    "\n",
    "This measure calculates the similarity between two vectors by considering their angle. It is often used for text data and is resistant to changes in the magnitude of the vectors. However, it does not consider the relative importance of different features."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "fb9fa0b3-86a1-4079-99f7-33891f93f0b6",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.metrics.pairwise import cosine_similarity\n",
    "\n",
    "# Calculate cosine similarity between two vectors\n",
    "vector1 = [1, 2, 3]\n",
    "vector2 = [4, 5, 6]\n",
    "\n",
    "# Use the cosine_similarity function from scikit-learn to calculate the similarity\n",
    "cosine_sim = cosine_similarity([vector1], [vector2])[0][0]\n",
    "\n",
    "# Print the result\n",
    "print(\"Cosine Similarity between the given two vectors: \" + \\\n",
    "      str(cosine_sim))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "22288d67-7006-4025-9185-491f3100dfe0",
   "metadata": {},
   "source": [
    "#### Jaccard Similarity\n",
    "\n",
    "This measure calculates the similarity between two sets by considering the size of their intersection and union. It is often used for categorical data and is resistant to changes in the size of the sets. However, it does not consider the sets' order or frequency of elements."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3d3339eb-60cb-4bab-8ca4-3ffc40ead500",
   "metadata": {},
   "outputs": [],
   "source": [
    "def jaccard_similarity(list1, list2):\n",
    "    \"\"\"\n",
    "    Calculates the Jaccard similarity between two lists.\n",
    "    \n",
    "    Parameters:\n",
    "    list1 (list): The first list to compare.\n",
    "    list2 (list): The second list to compare.\n",
    "    \n",
    "    Returns:\n",
    "    float: The Jaccard similarity between the two lists.\n",
    "    \"\"\"\n",
    "    # Convert the lists to sets for easier comparison\n",
    "    s1 = set(list1)\n",
    "    s2 = set(list2)\n",
    "    \n",
    "    # Calculate the Jaccard similarity by taking the length of the intersection of the sets\n",
    "    # and dividing it by the length of the union of the sets\n",
    "    return float(len(s1.intersection(s2)) / len(s1.union(s2)))\n",
    "\n",
    "# Calculate Jaccard similarity between two sets\n",
    "set1 = [1, 2, 3]\n",
    "set2 = [2, 3, 4]\n",
    "jaccard_sim = jaccard_similarity(set1, set2)\n",
    "\n",
    "# Print the result\n",
    "print(\"Jaccard Similarity between the given two sets: \" + \\\n",
    "      str(jaccard_sim))\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1c0bff22-35f0-4a3f-bb1b-dab41fa87843",
   "metadata": {},
   "source": [
    "#### Pearson Correlation Coefficient\n",
    "\n",
    "This measure calculates the linear correlation between two variables. It is often used for continuous numerical data and considers the relative importance of different features. However, it may not accurately reflect non-linear relationships."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "66c09410-c3fd-46cb-bcc6-8adf715105b7",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Calculate Pearson correlation coefficient between two variables\n",
    "x = [1, 2, 3, 4]\n",
    "y = [2, 3, 4, 5]\n",
    "\n",
    "# Numpy corrcoef function to calculate the Pearson correlation coefficient and p-value\n",
    "pearson_corr = np.corrcoef(x, y)[0][1]\n",
    "\n",
    "# Print the result\n",
    "print(\"Pearson Correlation between the given two variables: \" + \\\n",
    "      str(pearson_corr))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7a5e24a8-b8cc-41fc-a582-b5134d55f07b",
   "metadata": {},
   "source": [
    "** **\n",
    "### Practical Scenario\n",
    "\n",
    "Suppose we have 5 items with numerical attributes and we want to compare the similarities between these products in order to facilitate applications such as clustering, classification, or perhaps, recommendations."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bdec85a6-0f50-413d-857a-6b90c5bb8b04",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import seaborn as sns\n",
    "import random\n",
    "import matplotlib.pyplot as plt\n",
    "import pprint\n",
    "\n",
    "def calculate_similarities(products):\n",
    "    \"\"\"Calculate the similarity measures between all pairs of products.\n",
    "    \n",
    "    Parameters\n",
    "    ----------\n",
    "    products : list\n",
    "        A list of dictionaries containing the attributes of the products.\n",
    "    \n",
    "    Returns\n",
    "    -------\n",
    "    euclidean_similarities : numpy array\n",
    "        An array containing the Euclidean distance between each pair of products.\n",
    "    manhattan_distances : numpy array\n",
    "        An array containing the Manhattan distance between each pair of products.\n",
    "    cosine_similarities : numpy array\n",
    "        An array containing the cosine similarity between each pair of products.\n",
    "    jaccard_similarities : numpy array\n",
    "        An array containing the Jaccard index between each pair of products.\n",
    "    pearson_similarities : numpy array\n",
    "        An array containing the Pearson correlation coefficient between each pair of products.\n",
    "    \"\"\"\n",
    "    # Initialize arrays to store the similarity measures\n",
    "    euclidean_similarities = np.zeros((len(products), len(products)))\n",
    "    manhattan_distances = np.zeros((len(products), len(products)))\n",
    "    cosine_similarities = np.zeros((len(products), len(products)))\n",
    "    jaccard_similarities = np.zeros((len(products), len(products)))\n",
    "    pearson_similarities = np.zeros((len(products), len(products)))\n",
    "\n",
    "    # Calculate all the similarity measures in a single loop\n",
    "    for i in range(len(products)):\n",
    "        for j in range(i+1, len(products)):\n",
    "            p1 = products[i]['attributes']\n",
    "            p2 = products[j]['attributes']\n",
    "\n",
    "            # Calculate Euclidean distance\n",
    "            euclidean_similarities[i][j] = distance.euclidean(p1, p2)\n",
    "            euclidean_similarities[j][i] = euclidean_similarities[i][j]\n",
    "\n",
    "            # Calculate Manhattan distance\n",
    "            manhattan_distances[i][j] = distance.cityblock(p1, p2)\n",
    "            manhattan_distances[j][i] = manhattan_distances[i][j]\n",
    "\n",
    "            # Calculate cosine similarity\n",
    "            cosine_similarities[i][j] = cosine_similarity([p1], [p2])[0][0]\n",
    "            cosine_similarities[j][i] = cosine_similarities[i][j]\n",
    "\n",
    "            # Calculate Jaccard index\n",
    "            jaccard_similarities[i][j] = jaccard_similarity(p1, p2)\n",
    "            jaccard_similarities[j][i] = jaccard_similarities[i][j]\n",
    "\n",
    "            # Calculate Pearson correlation coefficient\n",
    "            pearson_similarities[i][j] = np.corrcoef(p1, p2)[0][1]\n",
    "            pearson_similarities[j][i] = pearson_similarities[i][j]\n",
    "            \n",
    "    return euclidean_similarities, manhattan_distances, cosine_similarities, jaccard_similarities, pearson_similarities\n",
    "\n",
    "def plot_similarities(similarities_list, labels, titles):\n",
    "    \"\"\"Plot the given similarities as heatmaps in subplots.\n",
    "    \n",
    "    Parameters\n",
    "    ----------\n",
    "    similarities_list : list of numpy arrays\n",
    "        A list of arrays containing the similarities between the products.\n",
    "    labels : list\n",
    "        A list of strings containing the labels for the products.\n",
    "    titles : list\n",
    "        A list of strings containing the titles for each plot.\n",
    "    \n",
    "    Returns\n",
    "    -------\n",
    "    None\n",
    "        This function does not return any values. It only plots the heatmaps.\n",
    "    \"\"\"\n",
    "    # Set up the plot\n",
    "    fig, ax = plt.subplots(nrows=1, \n",
    "                           ncols=len(similarities_list), figsize=(6*len(similarities_list), 6/1.680))\n",
    "\n",
    "    for i, similarities in enumerate(similarities_list):\n",
    "        # Plot the heatmap\n",
    "        sns.heatmap(similarities, xticklabels=labels, yticklabels=labels, ax=ax[i])\n",
    "        ax[i].set_title(titles[i])\n",
    "        ax[i].set_xlabel(\"Product\")\n",
    "        ax[i].set_ylabel(\"Product\")\n",
    "    \n",
    "    # Show the plot\n",
    "    plt.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "17961dbd-b85d-4ce1-889d-fb5e2d118080",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Define the products and their attributes\n",
    "products = [\n",
    "    {'name': 'Product 1', 'attributes': random.sample(range(1, 11), 5)},\n",
    "    {'name': 'Product 2', 'attributes': random.sample(range(1, 11), 5)},\n",
    "    {'name': 'Product 3', 'attributes': random.sample(range(1, 11), 5)},\n",
    "    {'name': 'Product 4', 'attributes': random.sample(range(1, 11), 5)},\n",
    "    {'name': 'Product 5', 'attributes': random.sample(range(1, 11), 5)}\n",
    "]\n",
    "\n",
    "pprint.pprint(products)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9bafc61f-f777-447d-a422-f8ac5bdf2079",
   "metadata": {},
   "outputs": [],
   "source": [
    "euclidean_similarities, manhattan_distances, \\\n",
    "cosine_similarities, jaccard_similarities, \\\n",
    "pearson_similarities = calculate_similarities(products)\n",
    "\n",
    "# Set the labels for the x-axis and y-axis\n",
    "product_labels = [product['name'] for product in products]\n",
    "\n",
    "# List of similarity measures and their titles\n",
    "similarities_list = [euclidean_similarities, cosine_similarities, pearson_similarities, \n",
    "                     jaccard_similarities, manhattan_distances]\n",
    "titles = [\"Euclidean Distance\", \"Cosine Similarity\", \"Pearson Correlation Coefficient\", \n",
    "          \"Jaccard Index\", \"Manhattan Distance\"]\n",
    "\n",
    "# Plot the heatmaps\n",
    "plot_similarities(similarities_list, product_labels, titles)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9a507819-e4f2-4b28-b1f4-fd215cfff40d",
   "metadata": {},
   "source": [
    "As we can see from the charts, each distance metric produces a heat map that represents different similarities between the products, and on a different scale. While each distance metric can be used to interpret whether two products are similar or not based on the metric's value, it is difficult to determine a true measure of similarity when comparing the results across different distance metrics."
   ]
  },
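  {
   "cell_type": "markdown",
   "id": "added-rescale-note",
   "metadata": {},
   "source": [
    "A minimal sketch of that rescaling: min-max scale each matrix to $[0, 1]$ and flip the distance-based matrices, so that a value of 1 always means \"most similar\". This is one simple convention for putting the metrics side by side, not the only way to compare them."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "added-rescale-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "# One simple convention for cross-metric comparison: min-max scale to [0, 1]\n",
    "def to_unit_similarity(matrix, is_distance):\n",
    "    \"\"\"Min-max scale a square matrix to [0, 1]; flip it if it holds distances.\"\"\"\n",
    "    value_range = matrix.max() - matrix.min()\n",
    "    # Guard against a zero range (all entries identical)\n",
    "    scaled = (matrix - matrix.min()) / value_range if value_range else np.zeros_like(matrix)\n",
    "    # For distances, smaller means more similar, so invert the scale\n",
    "    return 1 - scaled if is_distance else scaled\n",
    "\n",
    "# Flags marking which entries of similarities_list hold distances\n",
    "# (same order as similarities_list defined above)\n",
    "distance_flags = [True, False, False, False, True]\n",
    "\n",
    "unit_similarities = [to_unit_similarity(m, flag)\n",
    "                     for m, flag in zip(similarities_list, distance_flags)]\n",
    "\n",
    "# Re-plot the heat maps, now on a shared 0-to-1 scale\n",
    "plot_similarities(unit_similarities, product_labels,\n",
    "                  [t + \" (rescaled)\" for t in titles])"
   ]
  },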
  {
   "cell_type": "markdown",
   "id": "f5314124-5a7e-4865-9cd8-3b523d257a99",
   "metadata": {},
   "source": [
    "** **\n",
    "### How to choose the metric?\n",
    "\n",
    "There is no single \"true\" answer when it comes to choosing a distance metric, as different distance metrics are better suited for different types of data and different analysis goals. However, there are some factors that can help narrow down the possible distance metrics that might be appropriate for a given situation. Some things to consider when choosing a distance metric include:\n",
    "\n",
    "- The type of data you are working with: Some distance metrics are more appropriate for continuous data, while others are better suited for categorical or binary data.\n",
    "- The characteristics of the data: Different distance metrics are sensitive to different aspects of the data, such as the magnitudes of differences between attributes or the angles between attributes. Consider which characteristics of the data are most important to your analysis and choose a distance metric that is sensitive to these characteristics.\n",
    "- The goals of your analysis: Different distance metrics can highlight different patterns or relationships in the data, so consider what you are trying to learn from your analysis and choose a distance metric that is well-suited to this purpose.\n",
    "\n",
    "Personally, I often use the following chart as a starting point when choosing a distance metric.\n",
    "\n",
    "![flowchart](similaritymeasures.png)\n",
    "\n",
    "Again, it is important to carefully consider the data type and characteristics when selecting a similarity metric, as well as the specific goals of the analysis."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.16"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}


================================================
FILE: natural-language-processing/embedding-models/data/training_data.csv
================================================
base,ref,similarity
dresser,data scientist,0.5
clubhost,data scientist,0.5
co-pilot,data analyst,0.5
data analyst,textile quality manager,0.5
data analyst,tracer powder blender,0.5
data analyst,pottery and porcelain caster,0.5
data analyst,equine dental technician,0.5
data analyst," Equine dental technicians provide routine equine dental care, using appropriate equipment in accordance with national legislation. 
",0.5
data analyst,vessel assembly inspector,0.5
data analyst,bacteriology technician,0.5
data analyst,mover,0.5
data scientist,manager di fondi pensione,0.5
data scientist,takelaar,0.5
data scientist,tapijtknoper,0.5
data scientist,technicus waterleidingssystemen,0.5
data scientist,textile quality manager,0.5
data scientist,nailing machine operator,0.5
data scientist,hot foil operator,0.5
data scientist,I manager di fondi pensione coordinano i fondi pensione al fine di fornire prestazioni pensionistiche a individui o organizzazioni. Assicurano la gestione giornaliera dei fondi pensione e definiscono la politica strategica per lo sviluppo di nuovi pacchetti pensionistici.,0.5
data scientist,reisagent,0.5
data scientist,matrassenmaker,0.5
data scientist,emailleur,0.5
data scientist,meester-koffiebrander,0.5
data scientist,operatore di macchine per la produzione cartotecnica/operatrice su macchine per la produzione cartotecnica,0.5
data scientist,watch and clock repairer,0.5
data scientist,tapijtlegger,0.5
data scientist,credit union manager,0.5
data scientist,"Textile quality managers implement, manage and promote quality systems. They make sure that the textile products adhere to the quality standards of the organisation. Textile quality managers therefore inspect textile production lines and products.",0.5
data scientist,assemblagetechnicus accusystemen,0.5
data scientist,yeast distiller,0.5
data scientist,"Hot foil operators tend machines which apply a metallic foil on other materials using pressure cylinders and heating. They also mix colors, set up the appropriate machinery equipment and monitor printing.",0.5
data scientist,addetto alle operazioni fiscali/addetta alle operazioni fiscali,0.5
data scientist,viskooitechnicus,0.5
data scientist,teamleider in een mijn,0.5
rental manager,data scientist,0.5
train preparer,data scientist,0.5
ceiling installer,data analyst,0.5
igienista dentale,data scientist,0.5
cabin crew manager,data scientist,0.5
incisore su metallo,data scientist,0.5
justice of the peace,data analyst,0.5
import export manager,data scientist,0.5
conducente di autocarri,data scientist,0.5
conducente di autocarri,"I data scientist scoprono e interpretano fonti ricche di dati, gestiscono grandi quantità di dati, ne aggregano le fonti, garantiscono la coerenza degli insiemi di dati e creano visualizzazioni per contribuire alla loro comprensione. Costruiscono modelli matematici che utilizzano dati, presentano e comunicano informazioni e conoscenze sui dati agli specialisti e agli scienziati nella loro squadra e, se necessario, a un pubblico non specializzato e raccomandano modalità di applicazione dei dati.",0.5
footwear CAD patternmaker,data analyst,0.5
ufficiale di stato civile,data scientist,0.5
dental instrument assembler,data scientist,0.5
waste management supervisor,data analyst,0.5
carrosserie- en voertuigbouwer,data scientist,0.5
fuel station specialised seller,data analyst,0.5
productieleider chemische industrie,data scientist,0.5
marinaio addetto al servizio di coperta,data scientist,0.5
allenatore sportivo/allenatrice sportiva,data scientist,0.5
allenatore sportivo/allenatrice sportiva,"I data scientist scoprono e interpretano fonti ricche di dati, gestiscono grandi quantità di dati, ne aggregano le fonti, garantiscono la coerenza degli insiemi di dati e creano visualizzazioni per contribuire alla loro comprensione. Costruiscono modelli matematici che utilizzano dati, presentano e comunicano informazioni e conoscenze sui dati agli specialisti e agli scienziati nella loro squadra e, se necessario, a un pubblico non specializzato e raccomandano modalità di applicazione dei dati.",0.5
water conservation technician supervisor,data scientist,0.5
supervisore di assemblaggio di veicoli a motore,data scientist,0.5
tecnico della qualità dei prodotti di pelletteria,data scientist,0.5
responsabile della distribuzione di macchine e attrezzature agricole,data scientist,0.5
"wholesale merchant in agricultural raw materials, seeds and animal feeds",data analyst,0.5
confezionatore caseario artigianale /confezionatrice casearia artigianale,data scientist,0.5
confezionatore caseario artigianale /confezionatrice casearia artigianale,"I data scientist scoprono e interpretano fonti ricche di dati, gestiscono grandi quantità di dati, ne aggregano le fonti, garantiscono la coerenza degli insiemi di dati e creano visualizzazioni per contribuire alla loro comprensione. Costruiscono modelli matematici che utilizzano dati, presentano e comunicano informazioni e conoscenze sui dati agli specialisti e agli scienziati nella loro squadra e, se necessario, a un pubblico non specializzato e raccomandano modalità di applicazione dei dati.",0.5
import-exportmanager ijzerwaren en producten voor loodgieterij en verwarming,data scientist,0.5
addetto all’assemblaggio di strumenti di precisione/addetta all’assemblaggio di strumenti di precisione,data scientist,0.5
data analyst,"Data analysts import, inspect, clean, transform, validate, model, or interpret collections of data with regard to the business goals of the company. They ensure that the data sources and repositories provide consistent and reliable data. Data analysts use different algorithms and IT tools as demanded by the situation and the current data. They might prepare reports in the form of visualisations such as graphs, charts, and dashboards.",1.0
data scientist,"Data scientists find and interpret rich data sources, manage large amounts of data, merge data sources, ensure consistency of data-sets, and create visualisations to aid in understanding data. They build mathematical models using data, present and communicate data insights and findings to specialists and scientists in their team and if required, to a non-expert audience, and recommend ways to apply the data.",1.0
data scientist,"Data scientists zoeken en interpreteren rijke gegevensbronnen, beheren grote hoeveelheden gegevens, voegen gegevensbronnen samen, zorgen voor de consistentie van datasets en creëren visualisaties om te helpen gegevens te begrijpen. Zij bouwen wiskundige modellen op basis van data, presenteren en communiceren gegevensinzichten en bevindingen aan specialisten en wetenschappers in hun team en, indien nodig, aan een niet-deskundig publiek, en bevelen manieren aan om de data toe te passen.",1.0
data scientist,"I data scientist scoprono e interpretano fonti ricche di dati, gestiscono grandi quantità di dati, ne aggregano le fonti, garantiscono la coerenza degli insiemi di dati e creano visualizzazioni per contribuire alla loro comprensione. Costruiscono modelli matematici che utilizzano dati, presentano e comunicano informazioni e conoscenze sui dati agli specialisti e agli scienziati nella loro squadra e, se necessario, a un pubblico non specializzato e raccomandano modalità di applicazione dei dati.",1.0
cosmologist,cosmology data scientist,0.74681668889336494
data analyst,data storage analyst,0.74681668889336494
data analyst,data warehouse analyst,0.74681668889336494
data analyst,data warehousing analyst,0.74681668889336494
meter reader,metering data analyst,0.74681668889336494
statistician,statistical data analyst,0.74681668889336494
data scientist,esperta di dati,0.74681668889336494
data scientist,data engineer,0.74681668889336494
data scientist,data-scientist,0.74681668889336494
data scientist,data research scientist,0.74681668889336494
data scientist,esperto di dati,0.74681668889336494
data scientist,data expert,0.74681668889336494
data scientist,data analyst,0.74681668889336494
data scientist,research data scientist,0.74681668889336494
data scientist,research analist,0.74681668889336494
data scientist,research data scientist,0.74681668889336494
data scientist,analista dei dati di ricerca,0.74681668889336494
data scientist,business data scientist,0.74681668889336494
analista dei dati,data analyst,0.74681668889336494
call centre analyst,sales data analyst,0.74681668889336494
call centre analyst,CRM data analyst,0.74681668889336494
call centre analyst,customer data analyst,0.74681668889336494
call centre analyst,senior data analyst,0.74681668889336494
call centre analyst,assistant data analyst,0.74681668889336494
call centre analyst,trainee data analyst,0.74681668889336494
call centre analyst,graduate data analyst,0.74681668889336494
call centre analyst,marketing data analyst,0.74681668889336494
call centre analyst,IT data analyst,0.74681668889336494
analyste de données,data analyst,0.74681668889336494
bioinformatics scientist,data scientist,0.74681668889336494
scientifique des données,data scientist,0.74681668889336494

================================================
FILE: natural-language-processing/embedding-models/domain_adaption_fine_tune_nlp_model.ipynb
================================================
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "cba912be-879a-4a9b-a7e9-180f42555cb9",
   "metadata": {},
   "source": [
    "## Domain Adaption: Fine-Tune Pre-Trained NLP Models\n",
    "\n",
    "### Introduction\n",
    "In today's world, the availability of pre-trained NLP models has greatly simplified the interpretation of textual data using deep learning techniques. However, while these models excel in general tasks, they often lack adaptability to specific domains. This comprehensive guide aims to walk you through the process of fine-tuning pre-trained NLP models to achieve improved performance in a particular domain.\n",
    "\n",
    "#### Motivation\n",
    "Although pre-trained NLP models like BERT and the Universal Sentence Encoder (USE) are effective in capturing linguistic intricacies, their performance in domain-specific applications can be limited due to the diverse range of datasets they are trained on. This limitation becomes evident when analyzing relationships within a specific domain. \n",
    "\n",
    "For example, when working with employment data, we expect the model to recognize the closer proximity between the roles of 'Data Scientist' and 'Machine Learning Engineer', or the stronger association between 'Python' and 'TensorFlow'. Unfortunately, general-purpose models often miss these nuanced relationships.\n",
    "\n",
    "To address this issue, we can fine-tune pre-trained models with high-quality, domain-specific datasets. This adaptation process significantly enhances the model's performance and precision, fully unlocking the potential of the NLP model.\n",
    "\n",
    "When dealing with large pre-trained NLP models, it is advisable to initially deploy the base model and consider fine-tuning only if its performance falls short for the specific problem at hand.\n",
    " \n",
    "This tutorial focuses on fine-tuning the Universal Sentence Encoder (USE) model using easily accessible open-source data.\n",
    "\n",
    "### Theoretical Overview\n",
    "Fine-tuning an ML model can be achieved through various strategies, such as supervised learning and reinforcement learning. In this tutorial, we will concentrate on a one(few)-shot learning approach combined with a siamese architecture for the fine-tuning process.\n",
    "\n",
    "#### Methodology\n",
    "In this tutorial, we utilize a siamese neural network, which is a specific type of Artificial Neural Network. This network leverages shared weights while simultaneously processing two distinct input vectors to compute comparable output vectors. Inspired by one-shot learning, this approach has proven to be particularly effective in capturing semantic similarity, although it may require longer training times and lack probabilistic output.\n",
    "\n",
    "A Siamese Neural Network creates an 'embedding space' where related concepts are positioned closely, enabling the model to better discern semantic relations.\n",
    "- Twin Branches and Shared Weights: The architecture consists of two identical branches, each containing an embedding layer with shared weights. These dual branches handle two inputs simultaneously, either similar or dissimilar.\n",
    "- Similarity and Transformation: The inputs are transformed into vector embeddings using the pre-trained NLP model. The architecture then calculates the similarity between the vectors. The similarity score, ranging between -1 and 1, quantifies the angular distance between the two vectors, serving as a metric for their semantic similarity.\n",
    "- Contrastive Loss and Learning: The model's learning is guided by the \"Contrastive Loss,\" which is the difference between the expected output (similarity score from the training data) and the computed similarity. This loss guides the adjustment of the model's weights to minimize the loss and enhance the quality of the learned embeddings."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "13a3c7e3-4b5f-4369-b6b0-e0ea5b43f425",
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "import math\n",
    "import tensorflow as tf\n",
    "import tensorflow_hub as hub\n",
    "from tensorflow import keras\n",
    "from tensorflow_text import SentencepieceTokenizer\n",
    "import os\n",
    "from datetime import datetime\n",
    "import numpy as np\n",
    "from utils import *"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9c770e7f-5a4e-42d7-89d6-a8eb0eec8210",
   "metadata": {},
   "source": [
    "** **\n",
    "#### Data Overview\n",
    "\n",
    "For the fine-tuning of pre-trained NLP models using this method, the training data should consist of pairs of text strings accompanied by similarity scores between them. \n",
    "\n",
    "In this tutorial, we use a dataset sourced from the ESCO classification dataset, which has been transformed to generate similarity scores based on the relationships between different data elements.\n",
    "\n",
    "Preparing the training data is a crucial step in the fine-tuning process. It is assumed that you have access to the required data and a method to transform it into the specified format. Since the focus of this article is to demonstrate the fine-tuning process, we will omit the details of how the data was generated using the ESCO dataset.\n",
    "\n",
    "Let's start by examining the training data:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "94c76e58-475b-49b4-a97e-5c7b1e6e2630",
   "metadata": {},
   "outputs": [],
   "source": [
    "# The data from this file is stored in the variable \"data\".\n",
    "data = pd.read_csv(\"./data/training_data.csv\")\n",
    "\n",
    "# Use the head function on the DataFrame to display its first 5 rows.\n",
    "data.head()\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "20dc8c3c-2835-4472-894f-12239de8173b",
   "metadata": {},
   "source": [
    "** **\n",
    "#### Baseline Model\n",
    "To begin, we establish the multilingual universal sentence encoder as our baseline model. It is essential to set this baseline before proceeding with the fine-tuning process.\n",
    "\n",
    "For this tutorial, we will use the STS benchmark and a sample similarity visualization as metrics to evaluate the changes and improvements achieved through the fine-tuning process.\n",
    "\n",
    "The STS Benchmark dataset consists of English sentence pairs, each associated with a similarity score. During the model training process, we evaluate the model's performance on this benchmark set. The persisted scores for each training run are the Pearson correlation between the predicted similarity scores and the actual similarity scores in the dataset. \n",
    "\n",
    "These scores ensure that as the model is fine-tuned with our context-specific training data, it maintains some level of generalizability."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f643d10f-716e-4982-984e-48f0bbf2e3df",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Loads the Universal Sentence Encoder Multilingual module from TensorFlow Hub.\n",
    "base_model_url = \"https://tfhub.dev/google/universal-sentence-encoder-multilingual/3\"\n",
    "base_model = tf.keras.Sequential([\n",
    "    hub.KerasLayer(base_model_url,\n",
    "                   input_shape=[],\n",
    "                   dtype=tf.string,\n",
    "                   trainable=False)\n",
    "])\n",
    "\n",
    "# Defines a list of test sentences. These sentences represent various job titles.\n",
    "test_text = ['Data Scientist', 'Data Analyst', 'Data Engineer',\n",
    "             'Nurse Practitioner', 'Registered Nurse', 'Medical Assistant',\n",
    "             'Social Media Manager', 'Marketing Strategist', 'Product Marketing Manager']\n",
    "\n",
    "# Creates embeddings for the sentences in the test_text list. \n",
    "# The np.array() function is used to convert the result into a numpy array.\n",
    "# The .tolist() function is used to convert the numpy array into a list, which might be easier to work with.\n",
    "vectors = np.array(base_model.predict(test_text)).tolist()\n",
    "\n",
    "# Calls the plot_similarity function to create a similarity plot.\n",
    "plot_similarity(test_text, vectors, 90, \"base model\")\n",
    "\n",
    "# Computes STS benchmark score for the base model\n",
    "pearsonr = sts_benchmark(base_model)\n",
    "print(\"STS Benachmark: \" + str(pearsonr))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "51a178d6-760d-4c0b-8994-0bac82118a98",
   "metadata": {},
   "source": [
    "** **\n",
    "#### Fine Tuning the Model\n",
    "The next step involves constructing the siamese model architecture using the baseline model and fine-tuning it with our domain-specific data."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f65eca14-f5c4-41f5-8048-ce7480759a33",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Load the pre-trained word embedding model\n",
    "embedding_layer = hub.load(base_model_url)\n",
    "\n",
    "# Create a Keras layer from the loaded embedding model\n",
    "shared_embedding_layer = hub.KerasLayer(embedding_layer, trainable=True)\n",
    "\n",
    "# Define the inputs to the model\n",
    "left_input = keras.Input(shape=(), dtype=tf.string)\n",
    "right_input = keras.Input(shape=(), dtype=tf.string)\n",
    "\n",
    "# Pass the inputs through the shared embedding layer\n",
    "embedding_left_output = shared_embedding_layer(left_input)\n",
    "embedding_right_output = shared_embedding_layer(right_input)\n",
    "\n",
    "# Compute the cosine similarity between the embedding vectors\n",
    "cosine_similarity = tf.keras.layers.Dot(axes=-1, normalize=True)(\n",
    "    [embedding_left_output, embedding_right_output]\n",
    ")\n",
    "\n",
    "# Convert the cosine similarity to angular distance\n",
    "pi = tf.constant(math.pi, dtype=tf.float32)\n",
    "clip_cosine_similarities = tf.clip_by_value(\n",
    "    cosine_similarity, -0.99999, 0.99999\n",
    ")\n",
    "acos_distance = 1.0 - (tf.acos(clip_cosine_similarities) / pi)\n",
    "\n",
    "# Package the model\n",
    "encoder = tf.keras.Model([left_input, right_input], acos_distance)\n",
    "\n",
    "# Compile the model\n",
    "encoder.compile(\n",
    "    optimizer=tf.keras.optimizers.Adam(\n",
    "        learning_rate=0.00001,\n",
    "        beta_1=0.9,\n",
    "        beta_2=0.9999,\n",
    "        epsilon=0.0000001,\n",
    "        amsgrad=False,\n",
    "        clipnorm=1.0,\n",
    "        name=\"Adam\",\n",
    "    ),\n",
    "    loss=tf.keras.losses.MeanSquaredError(\n",
    "        reduction=keras.losses.Reduction.AUTO, name=\"mean_squared_error\"\n",
    "    ),\n",
    "    metrics=[\n",
    "        tf.keras.metrics.MeanAbsoluteError(),\n",
    "        tf.keras.metrics.MeanAbsolutePercentageError(),\n",
    "    ],\n",
    ")\n",
    "\n",
    "# Print the model summary\n",
    "encoder.summary()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "250e87af-306a-47b9-8349-ca64c54f06ac",
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "early_stop = keras.callbacks.EarlyStopping(\n",
    "                monitor=\"loss\", patience=3, min_delta=0.001\n",
    "            )\n",
    "logdir = os.path.join(\n",
    "                \".\",\n",
    "                \"logs/fit/\" + datetime.now().strftime(\"%Y%m%d-%H%M%S\"),\n",
    "            )\n",
    "tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)\n",
    "\n",
    "# Model Input\n",
    "left_inputs, right_inputs, similarity = process_model_input(data)\n",
    "\n",
    "history = encoder.fit(\n",
    "                [left_inputs, right_inputs],\n",
    "                similarity,\n",
    "                batch_size=8,\n",
    "                epochs=20,\n",
    "                validation_split=0.2,\n",
    "                callbacks=[early_stop, tensorboard_callback],\n",
    "            )\n",
    "\n",
    "inputs = keras.Input(shape=[], dtype=tf.string)\n",
    "embedding = hub.KerasLayer(embedding_layer)(inputs)\n",
    "\n",
    "tuned_model = keras.Model(inputs=inputs, outputs=embedding)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "06975f60-e3a1-4c46-b5fa-3bc2512061a6",
   "metadata": {},
   "source": [
    "** **\n",
    "#### Evaluation\n",
    "\n",
    "Now that we have the fine-tuned model, let's re-evaluate it and compare the results to those of the base model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3260ff05-07aa-4c55-9509-4ca60fada326",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Creates embeddings for the sentences in the test_text list. \n",
    "# The np.array() function is used to convert the result into a numpy array.\n",
    "# The .tolist() function is used to convert the numpy array into a list, which might be easier to work with.\n",
    "vectors = np.array(tuned_model.predict(test_text)).tolist()\n",
    "\n",
    "# Calls the plot_similarity function to create a similarity plot.\n",
    "plot_similarity(test_text, vectors, 90, \"tuned model\")\n",
    "\n",
    "# Computes STS benchmark score for the tuned model\n",
    "pearsonr = sts_benchmark(tuned_model)\n",
    "print(\"STS Benachmark: \" + str(pearsonr))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8807399d-3360-4d7b-8413-4eb0c033e922",
   "metadata": {},
   "source": [
    "Based on fine-tuning the model on the relatively small dataset, the STS benchmark score is comparable to that of the baseline model, indicating that the tuned model still exhibits generalizability. However, the similarity visualization demonstrates strengthened similarity scores between similar titles and a reduction in scores for dissimilar ones."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "114c50ab-9d12-4c3b-8ea9-f4ca2725efd3",
   "metadata": {},
   "source": [
    "** **\n",
    "### Closing Thoughts\n",
    "\n",
    "Fine-tuning pre-trained NLP models for domain adaptation is a powerful technique to improve their performance and precision in specific contexts. By utilizing quality, domain-specific datasets and leveraging siamese neural networks, we can enhance the model's ability to capture semantic similarity.\n",
    "\n",
    "This tutorial provided a step-by-step guide to the fine-tuning process, using the Universal Sentence Encoder (USE) model as an example. We explored the theoretical framework, data preparation, baseline model evaluation, and the actual fine-tuning process. The results demonstrated the effectiveness of fine-tuning in strengthening similarity scores within a domain.\n",
    "\n",
    "By following this approach and adapting it to your specific domain, you can unlock the full potential of pre-trained NLP models and achieve better results in your natural language processing tasks"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.16"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}


================================================
FILE: natural-language-processing/embedding-models/pyproject.toml
================================================
[tool.poetry]
name = "embedding-models"
version = "0.1.0"
description = ""
authors = ["Shashank Kapadia <smhkapadia@gmail.com>"]
readme = "README.md"

[tool.poetry.dependencies]
python = ">=3.9,<3.10"
jupyterlab = "^4.0.2"
tensorflow-hub = "^0.13.0"
tensorflow = [
    { version = "2.10.0", platform = "linux" },
]
tensorflow-macos = [
    { version = "2.10.0", platform = "darwin" },
]
#tensorflow-text = "^2.10.0"
tensorflow-text = [
    {file = "packages/tensorflow_text-2.10.0-cp39-cp39-macosx_11_0_arm64.whl", platform = "darwin"},
]
pandas = "^2.0.3"
seaborn = "^0.12.2"
tqdm = "^4.65.0"


[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"


================================================
FILE: natural-language-processing/embedding-models/utils.py
================================================
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import tensorflow as tf
import tqdm
from scipy import stats
import pandas as pd
import os
import math


def plot_similarity(labels, features, rotation, version):
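    """
    Plot a heatmap of pairwise inner products between embedding vectors.

    Args:
        labels: Text labels for the heatmap rows/columns.
        features: Embedding vectors, one per label.
        rotation: Rotation (in degrees) applied to the x-axis tick labels.
        version: Plot title (e.g. "base model" or "tuned model").
    """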
    corr = np.inner(features, features)
    sns.set(font_scale=1.2)
    g = sns.heatmap(
        corr,
        xticklabels=labels,
        yticklabels=labels,
        vmin=0,
        vmax=1,
        cmap="YlOrRd")
    g.set_xticklabels(labels, rotation=rotation)
    g.set_title(version)

def cosine_similarity(vector_1, vector_2):
    """
    Compute cosine similarity between two vectors.

    Args:
        vector_1 (List[float]): A list of float values representing the first vector.
        vector_2 (List[float]): A list of float values representing the second vector.

    Returns:
        float: The cosine similarity between the two vectors, which is the dot product of
        the two vectors divided by the product of their magnitudes.
    """
    sumxx, sumxy, sumyy = 0, 0, 0
    for i in range(len(vector_1)):
        x = vector_1[i]
        y = vector_2[i]
        sumxx += x * x
        sumyy += y * y
        sumxy += x * y
    return sumxy / math.sqrt(sumxx * sumyy)


def sts_benchmark(model, type="dev"):
    """
    Compute the Pearson correlation between predicted cosine similarity scores and human-labeled similarity scores
    on the STS benchmark dataset.

    Args:
        model (tensorflow.keras.Model): A trained sentence embedding model that takes in an input sentence and
                                        outputs a corresponding sentence embedding.
        type (str): The type of STS benchmark dataset to use. Either "dev" for the development dataset or "test"
                    for the test dataset. Default is "dev".

    Returns:
        tuple: A tuple containing the Pearson correlation coefficient and the p-value of the correlation test.
    """

    def _get_sts_dataset(type="test"):
        """
        Download (if needed) and load the STS benchmark dataset.

        :param type: Which split to load: "dev" or "test". Default is "test".
        :return: A pandas DataFrame with columns "sim", "sent_1", and "sent_2".
        """

        sts_dataset = tf.keras.utils.get_file(
            fname="Stsbenchmark.tar.gz",
            origin="http://ixa2.si.ehu.es/stswiki/images/4/48/Stsbenchmark.tar.gz",
            extract=True,
        )
        if type == "dev":
            data = pd.read_table(
                os.path.join(
                    os.path.dirname(sts_dataset), "stsbenchmark", "sts-dev.csv"
                ),
                on_bad_lines="skip",
                engine="python",
                skip_blank_lines=True,
                usecols=[4, 5, 6],
                names=["sim", "sent_1", "sent_2"],
            )
        else:
            data = pd.read_table(
                os.path.join(
                    os.path.dirname(sts_dataset), "stsbenchmark", "sts-test.csv"
                ),
                on_bad_lines="skip",
                engine="python",
                skip_blank_lines=True,
                usecols=[4, 5, 6],
                names=["sim", "sent_1", "sent_2"],
            )

        return data

    data = _get_sts_dataset(type=type)
    data = data[[isinstance(s, str) for s in data["sent_2"]]].reset_index()

    # prepare data
    base_text = [data["sent_1"][i] for i in range(len(data))]
    ref_text = [data["sent_2"][i] for i in range(len(data))]
    scores = data["sim"].tolist()

    base_vectors = []
    ref_vectors = []

    # get text vectors from the given model, one sentence at a time
    for i in range(len(base_text)):
        base_vectors.append(list(model.predict([base_text[i]])[0]))
        ref_vectors.append(list(model.predict([ref_text[i]])[0]))

    base_cosine_similarity = [
        cosine_similarity(base_vectors[i], ref_vectors[i])
        for i in range(len(base_text))
    ]
    return stats.pearsonr(scores, base_cosine_similarity)

def process_model_input(data):
    """
    Processes the input data by reshaping the left and right inputs and converting the similarity values.

    Args:
        data: (dict) A dictionary containing the keys "base", "ref", and "similarity",
            with values corresponding to the input base text, reference text, and similarity values respectively.

    Returns:
        tuple: A tuple of three numpy arrays containing the preprocessed left inputs, right inputs,
            and similarity values respectively.
    """
    text_list = [list(data["base"].values), list(data["ref"].values)]
    left_inputs = np.asarray(text_list[0])
    right_inputs = np.asarray(text_list[1])
    left_inputs = left_inputs.reshape(
        left_inputs.shape[0],
    )
    right_inputs = right_inputs.reshape(
        right_inputs.shape[0],
    )

    # Similarity labels: 1.0 for semantically equivalent inputs, lower values otherwise.
    # Compare the model's distance function, 1 - arccos(similarity)/pi, which maps a
    # cosine similarity in [-1, 1] to a score in [0, 1].
    similarity = np.asarray(list(data["similarity"].values))

    return left_inputs, right_inputs, similarity



================================================
FILE: natural-language-processing/text-processing/Building Blocks Text Pre-Processing.ipynb
================================================
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Building Blocks: Text Pre-Processing\n",
    "\n",
    "This article is the second of more to come articles on Natural Language Processing. The purpose of this series of articles is to document my journey as I learn about this subject, as well as help others gain efficiency from it.\n",
    "\n",
    "In the last article of our series, we introduced the concept of Natural Language Processing, you can read it here, and now you probably want to try it yourself, right? Great! Without further ado, let's dive in to the building blocks for statistical natural language processing. \n",
    "\n",
    "In this article, we'll introduce the key concepts, along with practical implementation in Python and the challenges to keep in mind at the time of application.\n",
    "\n",
    "**References:**\n",
    "- A General Approach to Preprocessing Text Data — KDnuggets. https://www.kdnuggets.com/2017/12/general-approach-preprocessing-text-data.html\n",
    "- Tokenization — Stanford NLP Group. https://nlp.stanford.edu/IR-book/html/htmledition/tokenization-1.html\n",
    "- Text Mining in Bovine Diseases — ijcaonline.org. https://www.ijcaonline.org/volume6/number10/pxc3871454.pdf\n",
    "- Stemming and lemmatization — Stanford NLP Group. https://nlp.stanford.edu/IR-book/html/htmledition/stemming-and-lemmatization-1.html\n",
    "** **"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Text Normalization\n",
    "\n",
    "Normalizing the text means converting it to a more convenient, standard form before performing turning it to features for higher level modeling. Think of this step as converting human readable language into a form that is machine readable.\n",
    "\n",
    "The standard framework to normalize the text includes:\n",
    "1. Tokenization\n",
    "2. Stop Words Removal\n",
    "3. Morphological Normalization\n",
    "4. Collocation\n",
    "\n",
    "Data preprocessing consists of a number of steps, any number of which may or not apply to a given task. More generally, in this article we'll discuss some predetermined body of text, and perform some basic transformative analysis that can be used for performing further, more meaningful natural language processing\n",
    "\n",
    "** **\n",
    "#### Tokenization\n",
    "\n",
    "Given a character sequence and a defined document unit (blurb of texts), tokenization is the task of chopping it up into pieces, called tokens, perhaps at the same time throwing away certain characters/words, such as punctuation. Ordinarily, there are two types of tokenization:\n",
    "\n",
    "1. Word Tokenization: Used to separate words via unique space character. Depending on the application, word tokenization may also tokenize multi-word expressions like New York. This is often times is closely tied to a process called Named Entity Recognition. Later in this tutorial, we will look at Collocation (Phrase) Modeling that helps address part of this challenge\n",
    "\n",
    "2. Sentence Tokenization/Segmentation: Along with word tokenization, sentence segmentation is a crucial step in text processing. This is usually performed based on punctuations such as \".\", \"?\", \"!\" as they tend to mark the sentence boundaries\n",
    "\n",
    "**Challenges:**\n",
    "- The use of abbreviations may prompt the tokenizer to detect a sentence boundary where there is none. \n",
    "- Numbers, special characters, hyphenation, and capitalization. In the expressions \"don't,\" \"I'd,\" \"John's\" do we have one, two or three tokens?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import nltk\n",
    "\n",
    "nltk.download('punkt')\n",
    "nltk.download('stopwords')\n",
    "nltk.download('wordnet')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from nltk.tokenize import sent_tokenize, word_tokenize\n",
    "\n",
    "#Sentence Tokenization\n",
    "print ('Following is the list of sentences tokenized from the sample review\\n')\n",
    "\n",
    "sample_text = \"\"\"The first time I ate here I honestly was not that impressed. I decided to wait a bit and give it another chance. \n",
    "I have recently eaten there a couple of times and although I am not convinced that the pricing is particularly on point the two mushroom and \n",
    "swiss burgers I had were honestly very good. The shakes were also tasty. Although Mad Mikes is still my favorite burger around, \n",
    "you can do a heck of a lot worse than Smashburger if you get a craving\"\"\"\n",
    "\n",
    "tokenize_sentence = sent_tokenize(sample_text)\n",
    "\n",
    "print (tokenize_sentence)\n",
    "print ('---------------------------------------------------------\\n')\n",
    "print ('Following is the list of words tokenized from the sample review sentence\\n')\n",
    "tokenize_words = word_tokenize(tokenize_sentence[1])\n",
    "print (tokenize_words)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "** **\n",
    "#### Stop Words Removal\n",
    "Often, there are a few ubiquitous words which would appear to be of little value in helping the purpose of analysis but increases the dimensionality of feature set, are excluded from the vocabulary entirely as the part of stop words removal process. There are two considerations usually that motivate this removal.\n",
    "\n",
    "1. Irrelevance: Allows one to analyze only on content-bearing words. Stopwords, also called empty words because they generally do not bear much meaning, introduce noise in the analysis/modeling process\n",
    "2. Dimension: Removing the stopwords also allows one to reduce the tokens in documents significantly, and thereby decreasing feature dimension\n",
    "\n",
    "**Challenges:**\n",
    "\n",
    "Converting all characters into lowercase letters before stopwords removal process can introduce ambiguity in the text, and sometimes entirely changing the meaning of it. For example, with the expressions \"US citizen\" will be viewed as \"us citizen\" or \"IT scientist\" as \"it scientist\". Since both *us* and *it* are normally considered stop words, it would result in an inaccurate outcome. The strategy regarding the treatment of stopwords can thus be refined by identifying that \"US\" and \"IT\" are not pronouns in the above examples, through a part-of-speech tagging step."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from nltk.corpus import stopwords\n",
    "from nltk.tokenize import word_tokenize\n",
    "\n",
    "# define the language for stopwords removal\n",
    "stopwords = set(stopwords.words(\"english\"))\n",
    "print (\"\"\"{0} stop words\"\"\".format(len(stopwords)))\n",
    "\n",
    "tokenize_words = word_tokenize(sample_text)\n",
    "filtered_sample_text = [w for w in tokenize_words if not w in stopwords]\n",
    "\n",
    "print ('\\nOriginal Text:')\n",
    "print ('------------------\\n')\n",
    "print (sample_text)\n",
    "print ('\\n Filtered Text:')\n",
    "print ('------------------\\n')\n",
    "print (' '.join(str(token) for token in filtered_sample_text))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "** **\n",
    "#### Morphological Normalization\n",
    "Morphology, in general, is the study of the way words are built up from smaller meaning-bearing units, morphomes. For example, dogs consists of two morphemes: dog and s\n",
    "\n",
    "Two commonly used techniques for text normalization are:\n",
    "\n",
    "1. Stemming: The procedure aims to identify the stem of a word and use it in lieu of the word itself. The most popular algorithm for stemming English, and one that has repeatedly been shown to be empirically very effective, is Porter's algorithm. The entire algorithm is too long and intricate to present here, but you can find details here\n",
    "2. Lemmatization: This process refers to doing things correctly with the use of vocabulary and morphological analysis of words, typically aiming to remove inflectional endings only and to return the base or dictionary form of a word, which is known as the lemma.\n",
    "\n",
    "If confronted with the token saw, stemming might return just s, whereas lemmatization would attempt to return either see or saw depending on whether the use of the token was as a verb or a noun"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from nltk.stem import PorterStemmer\n",
    "from nltk.stem import WordNetLemmatizer\n",
    "from nltk.tokenize import word_tokenize\n",
    "\n",
    "ps = PorterStemmer()\n",
    "lemmatizer = WordNetLemmatizer()\n",
    "\n",
    "tokenize_words = word_tokenize(sample_text)\n",
    "\n",
    "stemmed_sample_text = []\n",
    "for token in tokenize_words:\n",
    "    stemmed_sample_text.append(ps.stem(token))\n",
    "\n",
    "lemma_sample_text = []\n",
    "for token in tokenize_words:\n",
    "    lemma_sample_text.append(lemmatizer.lemmatize(token))\n",
    "    \n",
    "print ('\\nOriginal Text:')\n",
    "print ('------------------\\n')\n",
    "print (sample_text)\n",
    "\n",
    "print ('\\nFiltered Text: Stemming')\n",
    "print ('------------------\\n')\n",
    "print (' '.join(str(token) for token in stemmed_sample_text))\n",
    "\n",
    "print ('\\nFiltered Text: Lemmatization')\n",
    "print ('--------------------------------\\n')\n",
    "print (' '.join(str(token) for token in lemma_sample_text))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "** **\n",
    "**Challenges:**\n",
    "\n",
    "Often, full morphological analysis produces at most very modest benefits for analysis. Neither form of normalization improve language information performance in aggregate, both from relevance and dimensionality reduction standpoint - at least not for the following situations:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from nltk.stem import PorterStemmer\n",
    "words = [\"operate\", \"operating\", \"operates\", \"operation\", \"operative\", \"operatives\", \"operational\"]\n",
    "\n",
    "ps = PorterStemmer()\n",
    "\n",
    "for token in words:\n",
    "    print (ps.stem(token))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "** **\n",
    "As an example of what can go wrong, note that the Porter stemmer stems all of the following words to oper\n",
    "However, since operate in its various forms is a common verb, we would expect to lose considerable precision:\n",
    "- operational AND research\n",
    "- operating AND system\n",
    "- operative AND dentistry\n",
    "\n",
    "For cases like these, moving to using a lemmatizer would not completely fix the problem because particular inflectional forms are used in specific collocations. Getting better value from term normalization depends more on pragmatic issues of word use than on formal issues of linguistic morphology"
   ]
  }
 ],
 "metadata": {
  "environment": {
   "name": "common-cpu.m49",
   "type": "gcloud",
   "uri": "gcr.io/deeplearning-platform-release/base-cpu:m49"
  },
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}


================================================
FILE: natural-language-processing/text-processing/pyproject.toml
================================================
[tool.poetry]
name = "text-processing"
version = "0.1.0"
description = ""
authors = ["Shashank Kapadia <smhkapadia@gmail.com>"]
readme = "README.md"
packages = [{include = "text_processing"}]

[tool.poetry.dependencies]
python = "^3.10"
jupyterlab = "^3.5.2"
nltk = "^3.8.1"


[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"


================================================
FILE: natural-language-processing/topic-modeling/Evaluate Topic Models.ipynb
================================================
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Evaluate Topic Model in Python: Latent Dirichlet Allocation (LDA)\\\n",
    "##### A step-by-step guide to building interpretable topic models\n",
    "\n",
    "** **\n",
    "*Preface: This article aims to provide consolidated information on the underlying topic and is not to be considered as the original work. The information and the code are repurposed through several online articles, research papers, books, and open-source code*\n",
    "** **\n",
    "\n",
    "In the previous [article](https://towardsdatascience.com/end-to-end-topic-modeling-in-python-latent-dirichlet-allocation-lda-35ce4ed6b3e0), I introduced the concept of topic modeling and walked through the code for developing your first topic model using Latent Dirichlet Allocation (LDA) method in the python using Gensim implementation.\n",
    "\n",
    "Pursuing on that understanding, in this article, we’ll go a few steps deeper by outlining the framework to quantitatively evaluate topic models through the measure of topic coherence and share the code template in python using Gensim implementation to allow for end-to-end model development.\n",
    "\n",
    "### Why evaluate topic models?\n",
    "\n",
    "![img](https://tinyurl.com/y3xznjwq)\n",
    "\n",
    "We know probabilistic topic models, such as LDA, are popular tools for text analysis, providing both a predictive and latent topic representation of the corpus. However, there is a longstanding assumption that the latent space discovered by these models is generally meaningful and useful, and that evaluating such assumptions is challenging due to its unsupervised training process. Besides, there is a no-gold standard list of topics to compare against every corpus.\n",
    "\n",
    "Nevertheless, it is equally important to identify if a trained model is objectively good or bad, as well have an ability to compare different models/methods. To do so, one would require an objective measure for the quality. Traditionally, and still for many practical applications, to evaluate if “the correct thing” has been learned about the corpus, an implicit knowledge and “eyeballing” approaches are used. Ideally, we’d like to capture this information in a single metric that can be maximized, and compared.\n",
    "\n",
    "Let’s take a look at roughly what approaches are commonly used for the evaluation:\n",
    "\n",
    "**Eye Balling Models**\n",
    "- Top N words\n",
    "- Topics / Documents\n",
    "\n",
    "**Intrinsic Evaluation Metrics**\n",
    "- Capturing model semantics\n",
    "- Topics interpretability\n",
    "\n",
    "**Human Judgements**\n",
    "- What is a topic\n",
    "\n",
    "**Extrinsic Evaluation Metrics/Evaluation at task**\n",
    "- Is model good at performing predefined tasks, such as classification\n",
    "\n",
    "Natural language is messy, ambiguous and full of subjective interpretation, and sometimes trying to cleanse ambiguity reduces the language to an unnatural form. In this article, we’ll explore more about topic coherence, an intrinsic evaluation metric, and how you can use it to quantitatively justify the model selection.\n",
    "\n",
    "### What is Topic Coherence?\n",
    "\n",
    "Before we understand topic coherence, let’s briefly look at the perplexity measure. Perplexity as well is one of the intrinsic evaluation metric, and is widely used for language model evaluation. It captures how surprised a model is of new data it has not seen before, and is measured as the normalized log-likelihood of a held-out test set. \n",
    "\n",
    "Focussing on the log-likelihood part, you can think of the perplexity metric as measuring how probable some new unseen data is given the model that was learned earlier. That is to say, how well does the model represent or reproduce the statistics of the held-out data.\n",
    "\n",
    "However, recent studies have shown that predictive likelihood (or equivalently, perplexity) and human judgment are often not correlated, and even sometimes slightly anti-correlated.\n",
    "\n",
    "*Optimizing for perplexity may not yield human interpretable topics*\n",
    "\n",
    "This limitation of perplexity measure served as a motivation for more work trying to model the human judgment, and thus *Topic Coherence*.\n",
    "\n",
    "The concept of topic coherence combines a number of measures into a framework to evaluate the coherence between topics inferred by a model. But before that…\n",
    "\n",
    "#### What is topic coherence?\n",
    "Topic Coherence measures score a single topic by measuring the degree of semantic similarity between high scoring words in the topic. These measurements help distinguish between topics that are semantically interpretable topics and topics that are artifacts of statistical inference. But,\n",
    "\n",
    "#### What is coherence?\n",
    "Topic Coherence measures score a single topic by measuring the degree of semantic similarity between high scoring words in the topic. These measurements help distinguish between topics that are semantically interpretable topics and topics that are artifacts of statistical inference. But …\n",
    "\n",
    "### Coherence Measures\n",
    "Let’s take quick look at different coherence measures, and how they are calculated:\n",
    "\n",
    "1. `C_v` measure is based on a sliding window, one-set segmentation of the top words and an indirect confirmation measure that uses normalized pointwise mutual information (NPMI) and the cosine similarity\n",
    "2. `C_p` is based on a sliding window, one-preceding segmentation of the top words and the confirmation measure of Fitelson's coherence\n",
    "3. `C_uci` measure is based on a sliding window and the pointwise mutual information (PMI) of all word pairs of the given top words\n",
    "4. `C_umass` is based on document cooccurrence counts, a one-preceding segmentation and a logarithmic conditional probability as confirmation measure\n",
    "5. `C_npmi` is an enhanced version of the C_uci coherence using the normalized pointwise mutual information (NPMI)\n",
    "6. `C_a` is based on a context window, a pairwise comparison of the top words and an indirect confirmation measure that uses normalized pointwise mutual information (NPMI) and the cosine similarity\n",
    "\n",
    "There is, of course, a lot more to the concept of topic model evaluation, and the coherence measure. However, keeping in mind the length, and purpose of this article, let’s apply these concepts into developing a model that is at least better than with the default parameters. Also, we’ll be re-purposing already available online pieces of code to support this exercise instead of re-inventing the wheel."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Model Implementation\n",
    "1. Loading Data\n",
    "2. Data Cleaning\n",
    "3. Phrase Modeling: Bi-grams and Tri-grams\n",
    "4. Data Transformation: Corpus and Dictionary\n",
    "5. Base Model\n",
    "6. Hyper-parameter Tuning\n",
    "7. Final model\n",
    "8. Visualize Results\n",
    "\n",
    "** **\n",
    "\n",
    "For this tutorial, we’ll use the dataset of papers published in NeurIPS (NIPS) conference which is one of the most prestigious yearly events in the machine learning community. The CSV data file contains information on the different NeurIPS papers that were published from 1987 until 2016 (29 years!). These papers discuss a wide variety of topics in machine learning, from neural networks to optimization methods, and many more.\n",
    "\n",
    "<img src=\"https://s3.amazonaws.com/assets.datacamp.com/production/project_158/img/nips_logo.png\" alt=\"The logo of NIPS (Neural Information Processing Systems)\">\n",
    "\n",
    "Let’s start by looking at the content of the file\n",
    "\n",
    "** **\n",
    "#### Step 1: Loading Data\n",
    "** **\n",
    "\n",
    "For this tutorial, we’ll use the dataset of papers published in NIPS conference. The NIPS conference (Neural Information Processing Systems) is one of the most prestigious yearly events in the machine learning community. The CSV data file contains information on the different NIPS papers that were published from 1987 until 2016 (29 years!). These papers discuss a wide variety of topics in machine learning, from neural networks to optimization methods, and many more.\n",
    "\n",
    "Let’s start by looking at the content of the file"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import zipfile\n",
    "import pandas as pd\n",
    "import os\n",
    "\n",
    "# Open the zip file\n",
    "with zipfile.ZipFile(\"./data/NIPS Papers.zip\", \"r\") as zip_ref:\n",
    "    # Extract the file to a temporary directory\n",
    "    zip_ref.extractall(\"temp\")\n",
    "\n",
    "# Read the CSV file into a pandas DataFrame\n",
    "papers = pd.read_csv(\"temp/NIPS Papers/papers.csv\")\n",
    "\n",
    "# Print head\n",
    "papers.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "** **\n",
    "#### Step 2: Data Cleaning\n",
    "** **\n",
    "\n",
    "Since the goal of this analysis is to perform topic modeling, we will solely focus on the text data from each paper, and drop other metadata columns"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Remove the columns\n",
    "papers = papers.drop(columns=['id', 'title', 'abstract', \n",
    "                              'event_type', 'pdf_name', 'year'], axis=1)\n",
    "\n",
    "# sample only 100 papers\n",
    "papers = papers.sample(100)\n",
    "\n",
    "# Print out the first rows of papers\n",
    "papers.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "##### Remove punctuation/lower casing\n",
    "\n",
    "Next, let’s perform a simple preprocessing on the content of paper_text column to make them more amenable for analysis, and reliable results. To do that, we’ll use a regular expression to remove any punctuation, and then lowercase the text"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Load the regular expression library\n",
    "import re\n",
    "\n",
    "# Remove punctuation\n",
    "papers['paper_text_processed'] = papers['paper_text'].map(lambda x: re.sub('[,\\.!?]', '', x))\n",
    "\n",
    "# Convert the titles to lowercase\n",
    "papers['paper_text_processed'] = papers['paper_text_processed'].map(lambda x: x.lower())\n",
    "\n",
    "# Print out the first rows of papers\n",
    "papers['paper_text_processed'].head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "##### Tokenize words and further clean-up text\n",
    "\n",
    "Let’s tokenize each sentence into a list of words, removing punctuations and unnecessary characters altogether."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import gensim\n",
    "from gensim.utils import simple_preprocess\n",
    "\n",
    "def sent_to_words(sentences):\n",
    "    for sentence in sentences:\n",
    "        yield(gensim.utils.simple_preprocess(str(sentence), deacc=True))  # deacc=True removes punctuations\n",
    "\n",
    "data = papers.paper_text_processed.values.tolist()\n",
    "data_words = list(sent_to_words(data))\n",
    "\n",
    "print(data_words[:1][0][:30])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "** **\n",
    "#### Step 3: Phrase Modeling: Bigram and Trigram Models\n",
    "** **\n",
    "\n",
    "Bigrams are two words frequently occurring together in the document. Trigrams are 3 words frequently occurring. Some examples in our example are: 'back_bumper', 'oil_leakage', 'maryland_college_park' etc.\n",
    "\n",
    "Gensim's Phrases model can build and implement the bigrams, trigrams, quadgrams and more. The two important arguments to Phrases are min_count and threshold.\n",
    "\n",
    "*The higher the values of these param, the harder it is for words to be combined.*"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Build the bigram and trigram models\n",
    "bigram = gensim.models.Phrases(data_words, min_count=5, threshold=100) # higher threshold fewer phrases.\n",
    "trigram = gensim.models.Phrases(bigram[data_words], threshold=100)  \n",
    "\n",
    "# Faster way to get a sentence clubbed as a trigram/bigram\n",
    "bigram_mod = gensim.models.phrases.Phraser(bigram)\n",
    "trigram_mod = gensim.models.phrases.Phraser(trigram)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Remove Stopwords, Make Bigrams and Lemmatize\n",
    "\n",
    "The phrase models are ready. Let’s define the functions to remove the stopwords, make trigrams and lemmatization and call them sequentially."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# NLTK Stop words\n",
    "import nltk\n",
    "nltk.download('stopwords')\n",
    "from nltk.corpus import stopwords\n",
    "\n",
    "stop_words = stopwords.words('english')\n",
    "stop_words.extend(['from', 'subject', 're', 'edu', 'use'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Define functions for stopwords, bigrams, trigrams and lemmatization\n",
    "def remove_stopwords(texts):\n",
    "    return [[word for word in simple_preprocess(str(doc)) if word not in stop_words] for doc in texts]\n",
    "\n",
    "def make_bigrams(texts):\n",
    "    return [bigram_mod[doc] for doc in texts]\n",
    "\n",
    "def make_trigrams(texts):\n",
    "    return [trigram_mod[bigram_mod[doc]] for doc in texts]\n",
    "\n",
    "def lemmatization(texts, allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV']):\n",
    "    \"\"\"https://spacy.io/api/annotation\"\"\"\n",
    "    texts_out = []\n",
    "    for sent in texts:\n",
    "        doc = nlp(\" \".join(sent)) \n",
    "        texts_out.append([token.lemma_ for token in doc if token.pos_ in allowed_postags])\n",
    "    return texts_out"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's call the functions in order."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!python -m spacy download en_core_web_sm"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import spacy\n",
    "\n",
    "# Remove Stop Words\n",
    "data_words_nostops = remove_stopwords(data_words)\n",
    "\n",
    "# Form Bigrams\n",
    "data_words_bigrams = make_bigrams(data_words_nostops)\n",
    "\n",
    "# Initialize spacy 'en' model, keeping only tagger component (for efficiency)\n",
    "nlp = spacy.load(\"en_core_web_sm\", disable=['parser', 'ner'])\n",
    "\n",
    "# Do lemmatization keeping only noun, adj, vb, adv\n",
    "data_lemmatized = lemmatization(data_words_bigrams, allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV'])\n",
    "\n",
    "print(data_lemmatized[:1][0][:30])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "** **\n",
    "#### Step 4: Data transformation: Corpus and Dictionary\n",
    "** **\n",
    "\n",
    "The two main inputs to the LDA topic model are the dictionary(id2word) and the corpus. Let’s create them."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import gensim.corpora as corpora\n",
    "\n",
    "# Create Dictionary\n",
    "id2word = corpora.Dictionary(data_lemmatized)\n",
    "\n",
    "# Create Corpus\n",
    "texts = data_lemmatized\n",
    "\n",
    "# Term Document Frequency\n",
    "corpus = [id2word.doc2bow(text) for text in texts]\n",
    "\n",
    "# View\n",
    "print(corpus[:1][0][:30])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "** **\n",
    "#### Step 5: Base Model \n",
    "** **\n",
    "\n",
    "We have everything required to train the base LDA model. In addition to the corpus and dictionary, you need to provide the number of topics as well. Apart from that, alpha and eta are hyperparameters that affect sparsity of the topics. According to the Gensim docs, both defaults to 1.0/num_topics prior (we'll use default for the base model).\n",
    "\n",
    "chunksize controls how many documents are processed at a time in the training algorithm. Increasing chunksize will speed up training, at least as long as the chunk of documents easily fit into memory.\n",
    "\n",
    "passes controls how often we train the model on the entire corpus (set to 10). Another word for passes might be \"epochs\". iterations is somewhat technical, but essentially it controls how often we repeat a particular loop over each document. It is important to set the number of \"passes\" and \"iterations\" high enough."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Build LDA model\n",
    "lda_model = gensim.models.LdaMulticore(corpus=corpus,\n",
    "                                       id2word=id2word,\n",
    "                                       num_topics=10, \n",
    "                                       random_state=100,\n",
    "                                       chunksize=100,\n",
    "                                       passes=10,\n",
    "                                       per_word_topics=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "** **\n",
    "The above LDA model is built with 10 different topics where each topic is a combination of keywords and each keyword contributes a certain weightage to the topic.\n",
    "\n",
    "You can see the keywords for each topic and the weightage(importance) of each keyword using `lda_model.print_topics()`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from pprint import pprint\n",
    "\n",
    "# Print the Keyword in the 10 topics\n",
    "pprint(lda_model.print_topics())\n",
    "doc_lda = lda_model[corpus]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Compute Model Perplexity and Coherence Score\n",
    "\n",
    "Let's calculate the baseline coherence score"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from gensim.models import CoherenceModel\n",
    "\n",
    "# Compute Coherence Score\n",
    "coherence_model_lda = CoherenceModel(model=lda_model, texts=data_lemmatized, dictionary=id2word, coherence='c_v')\n",
    "coherence_lda = coherence_model_lda.get_coherence()\n",
    "print('Coherence Score: ', coherence_lda)"
   ]
  },
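  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The section heading also mentions perplexity. As a minimal sketch (reusing the trained `lda_model` and `corpus` from above), Gensim's `log_perplexity` returns a per-word likelihood bound, and exponentiating its negative gives the conventional perplexity value. Keep in mind that perplexity does not always track human judgment of topic quality, which is why coherence remains our primary metric."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Compute Perplexity: log_perplexity returns a per-word likelihood bound;\n",
    "# a higher bound (i.e., lower perplexity) generally indicates a better fit\n",
    "log_perplexity = lda_model.log_perplexity(corpus)\n",
    "print('Log perplexity (per-word bound): ', log_perplexity)\n",
    "print('Perplexity: ', np.exp(-log_perplexity))"
   ]
  },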
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "** **\n",
    "#### Step 6: Hyperparameter tuning\n",
    "** **\n",
    "First, let's differentiate between model hyperparameters and model parameters :\n",
    "\n",
    "- `Model hyperparameters` can be thought of as settings for a machine learning algorithm that are tuned by the data scientist before training. Examples would be the number of trees in the random forest, or in our case, number of topics K\n",
    "\n",
    "- `Model parameters` can be thought of as what the model learns during training, such as the weights for each word in a given topic.\n",
    "\n",
    "Now that we have the baseline coherence score for the default LDA model, let's perform a series of sensitivity tests to help determine the following model hyperparameters: \n",
    "- Number of Topics (K)\n",
    "- Dirichlet hyperparameter alpha: Document-Topic Density\n",
    "- Dirichlet hyperparameter beta: Word-Topic Density\n",
    "\n",
    "We'll perform these tests in sequence, one parameter at a time by keeping others constant and run them over the two difference validation corpus sets. We'll use `C_v` as our choice of metric for performance comparison "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# supporting function\n",
    "def compute_coherence_values(corpus, dictionary, k, a, b):\n",
    "    \n",
    "    lda_model = gensim.models.LdaMulticore(corpus=corpus,\n",
    "                                           id2word=dictionary,\n",
    "                                           num_topics=k, \n",
    "                                           random_state=100,\n",
    "                                           chunksize=100,\n",
    "                                           passes=10,\n",
    "                                           alpha=a,\n",
    "                                           eta=b)\n",
    "    \n",
    "    coherence_model_lda = CoherenceModel(model=lda_model, texts=data_lemmatized, dictionary=id2word, coherence='c_v')\n",
    "    \n",
    "    return coherence_model_lda.get_coherence()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's call the function, and iterate it over the range of topics, alpha, and beta parameter values"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import tqdm\n",
    "\n",
    "grid = {}\n",
    "grid['Validation_Set'] = {}\n",
    "\n",
    "# Topics range\n",
    "min_topics = 2\n",
    "max_topics = 11\n",
    "step_size = 1\n",
    "topics_range = range(min_topics, max_topics, step_size)\n",
    "\n",
    "# Alpha parameter\n",
    "alpha = list(np.arange(0.01, 1, 0.3))\n",
    "alpha.append('symmetric')\n",
    "alpha.append('asymmetric')\n",
    "\n",
    "# Beta parameter\n",
    "beta = list(np.arange(0.01, 1, 0.3))\n",
    "beta.append('symmetric')\n",
    "\n",
    "# Validation sets\n",
    "num_of_docs = len(corpus)\n",
    "corpus_sets = [gensim.utils.ClippedCorpus(corpus, int(num_of_docs*0.75)), \n",
    "               corpus]\n",
    "\n",
    "corpus_title = ['75% Corpus', '100% Corpus']\n",
    "\n",
    "model_results = {'Validation_Set': [],\n",
    "                 'Topics': [],\n",
    "                 'Alpha': [],\n",
    "                 'Beta': [],\n",
    "                 'Coherence': []\n",
    "                }\n",
    "\n",
    "# Can take a long time to run\n",
    "if 1 == 1:\n",
    "    pbar = tqdm.tqdm(total=(len(beta)*len(alpha)*len(topics_range)*len(corpus_title)))\n",
    "    \n",
    "    # iterate through validation corpuses\n",
    "    for i in range(len(corpus_sets)):\n",
    "        # iterate through number of topics\n",
    "        for k in topics_range:\n",
    "            # iterate through alpha values\n",
    "            for a in alpha:\n",
    "                # iterare through beta values\n",
    "                for b in beta:\n",
    "                    # get the coherence score for the given parameters\n",
    "                    cv = compute_coherence_values(corpus=corpus_sets[i], dictionary=id2word, \n",
    "                                                  k=k, a=a, b=b)\n",
    "                    # Save the model results\n",
    "                    model_results['Validation_Set'].append(corpus_title[i])\n",
    "                    model_results['Topics'].append(k)\n",
    "                    model_results['Alpha'].append(a)\n",
    "                    model_results['Beta'].append(b)\n",
    "                    model_results['Coherence'].append(cv)\n",
    "                    \n",
    "                    pbar.update(1)\n",
    "    pd.DataFrame(model_results).to_csv('./results/lda_tuning_results.csv', index=False)\n",
    "    pbar.close()"
   ]
  },
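  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before training the final model, here is a minimal pandas sketch (assuming the tuning loop above has written `./results/lda_tuning_results.csv`) to surface the best-scoring settings instead of an Excel-based review:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "\n",
    "# Load the grid-search results written by the tuning loop\n",
    "tuning_results = pd.read_csv('./results/lda_tuning_results.csv')\n",
    "\n",
    "# Top 5 parameter combinations on the full corpus, ranked by C_v coherence\n",
    "top_settings = (tuning_results[tuning_results['Validation_Set'] == '100% Corpus']\n",
    "                .sort_values('Coherence', ascending=False)\n",
    "                .head(5))\n",
    "print(top_settings)"
   ]
  },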
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "** **\n",
    "#### Step 7: Final Model\n",
    "** **\n",
    "\n",
    "Based on external evaluation (Code to be added from Excel based analysis), let's train the final model with parameters yielding highest coherence score"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "num_topics = 8\n",
    "\n",
    "lda_model = gensim.models.LdaMulticore(corpus=corpus,\n",
    "                                           id2word=id2word,\n",
    "                                           num_topics=num_topics, \n",
    "                                           random_state=100,\n",
    "                                           chunksize=100,\n",
    "                                           passes=10,\n",
    "                                           alpha=0.01,\n",
    "                                           eta=0.9)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from pprint import pprint\n",
    "\n",
    "# Print the Keyword in the 10 topics\n",
    "pprint(lda_model.print_topics())\n",
    "doc_lda = lda_model[corpus]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "** **\n",
    "#### Step 8: Visualize Results\n",
    "** **"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pyLDAvis.gensim_models as gensimvis\n",
    "import pickle \n",
    "import pyLDAvis\n",
    "\n",
    "# Visualize the topics\n",
    "pyLDAvis.enable_notebook()\n",
    "\n",
    "LDAvis_data_filepath = os.path.join('./results/ldavis_tuned_'+str(num_topics))\n",
    "\n",
    "# # this is a bit time consuming - make the if statement True\n",
    "# # if you want to execute visualization prep yourself\n",
    "if 1 == 1:\n",
    "    LDAvis_prepared = gensimvis.prepare(lda_model, corpus, id2word)\n",
    "    with open(LDAvis_data_filepath, 'wb') as f:\n",
    "        pickle.dump(LDAvis_prepared, f)\n",
    "\n",
    "# load the pre-prepared pyLDAvis data from disk\n",
    "with open(LDAvis_data_filepath, 'rb') as f:\n",
    "    LDAvis_prepared = pickle.load(f)\n",
    "\n",
    "pyLDAvis.save_html(LDAvis_prepared, './results/ldavis_tuned_'+ str(num_topics) +'.html')\n",
    "\n",
    "LDAvis_prepared"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "** **\n",
    "#### Closing Notes\n",
    "\n",
    "We started with understanding why evaluating the topic model is essential. Next, we reviewed existing methods and scratched the surface of topic coherence, along with the available coherence measures. Then we built a default LDA model using Gensim implementation to establish the baseline coherence score and reviewed practical ways to optimize the LDA hyperparameters.\n",
    "\n",
    "Hopefully, this article has managed to shed light on the underlying topic evaluation strategies, and intuitions behind it.\n",
    "\n",
    "** **\n",
    "#### References:\n",
    "1. http://qpleple.com/perplexity-to-evaluate-topic-models/\n",
    "2. https://www.amazon.com/Machine-Learning-Probabilistic-Perspective-Computation/dp/0262018020\n",
    "3. https://papers.nips.cc/paper/3700-reading-tea-leaves-how-humans-interpret-topic-models.pdf\n",
    "4. https://github.com/mattilyra/pydataberlin-2017/blob/master/notebook/EvaluatingUnsupervisedModels.ipynb\n",
    "5. https://www.machinelearningplus.com/nlp/topic-modeling-gensim-python/\n",
    "6. http://svn.aksw.org/papers/2015/WSDM_Topic_Evaluation/public.pdf\n",
    "7. http://palmetto.aksw.org/palmetto-webapp/"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.16"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}


================================================
FILE: natural-language-processing/topic-modeling/Introduction to Topic Modeling.ipynb
================================================
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Introduction\n",
    "##### How to get started with topic modeling using LDA in Python\n",
    "** **\n",
    "Topic Models, in a nutshell, are a type of statistical language models used for uncovering hidden structure in a collection of texts. In a practical and more intuitively, you can think of it as a task of:\n",
    "\n",
    "- **Dimensionality Reduction**, where rather than representing a text T in its feature space as {Word_i: count(Word_i, T) for Word_i in Vocabulary}, you can represent it in a topic space as {Topic_i: Weight(Topic_i, T) for Topic_i in Topics}\n",
    "- **Unsupervised Learning**, where it can be compared to clustering, as in the case of clustering, the number of topics, like the number of clusters, is an output parameter. By doing topic modeling, we build clusters of words rather than clusters of texts. A text is thus a mixture of all the topics, each having a specific weight\n",
    "- **Tagging**, abstract “topics” that occur in a collection of documents that best represents the information in them.\n",
    "\n",
    "There are several existing algorithms you can use to perform the topic modeling. The most common of it are, Latent Semantic Analysis (LSA/LSI), Probabilistic Latent Semantic Analysis (pLSA), and Latent Dirichlet Allocation (LDA)\n",
    "\n",
    "In this tutorial, we’ll take a closer look at LDA, and implement our first topic model using the sklearn implementation in python 2.7\n",
    "\n",
    "### Theoretical Overview\n",
    "LDA is a generative probabilistic model that assumes each topic is a mixture over an underlying set of words, and each document is a mixture of over a set of topic probabilities.\n",
    "\n",
    "![LDA_Model](https://github.com/chdoig/pytexas2015-topic-modeling/blob/master/images/lda-4.png?raw=true)\n",
    "\n",
    "We can describe the generative process of LDA as, given the M number of documents, N number of words, and prior K number of topics, the model trains to output:\n",
    "\n",
    "- `psi`, the distribution of words for each topic K\n",
    "- `phi`, the distribution of topics for each document i\n",
    "\n",
    "#### Parameters of LDA\n",
    "\n",
    "- `Alpha parameter` is Dirichlet prior concentration parameter that represents document-topic density — with a higher alpha, documents are assumed to be made up of more topics and result in more specific topic distribution per document.\n",
    "- `Beta parameter` is the same prior concentration parameter that represents topic-word density — with high beta, topics are assumed to made of up most of the words and result in a more specific word distribution per topic.\n",
    "\n",
    "**To read more: https://towardsdatascience.com/end-to-end-topic-modeling-in-python-latent-dirichlet-allocation-lda-35ce4ed6b3e0**"
   ]
  },
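  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To build intuition for these concentration parameters, here is a small hypothetical sketch (not part of the original article): sampling from a Dirichlet distribution shows how low concentration values yield sparse, peaked mixtures, while high values spread the weight more evenly."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "K = 5  # number of topics (illustrative)\n",
    "\n",
    "# Low alpha: a document concentrates on a few topics (sparse, peaked mixture)\n",
    "print('alpha=0.1:', rng.dirichlet([0.1] * K).round(2))\n",
    "\n",
    "# High alpha: a document spreads probability across many topics\n",
    "print('alpha=10: ', rng.dirichlet([10.0] * K).round(2))"
   ]
  },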
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "** **\n",
    "### LDA Implementation\n",
    "\n",
    "1. [Loading data](#load_data)\n",
    "2. [Data cleaning](#clean_data)\n",
    "3. [Exploratory analysis](#eda)\n",
    "4. [Prepare data for LDA analysis](#data_preparation)\n",
    "5. [LDA model training](#train_model)\n",
    "6. [Analyzing LDA model results](#results)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "** **\n",
    "For this tutorial, we’ll use the dataset of papers published in NeurIPS (NIPS) conference which is one of the most prestigious yearly events in the machine learning community. The CSV data file contains information on the different NeurIPS papers that were published from 1987 until 2016 (29 years!). These papers discuss a wide variety of topics in machine learning, from neural networks to optimization methods, and many more.\n",
    "\n",
    "<img src=\"https://s3.amazonaws.com/assets.datacamp.com/production/project_158/img/nips_logo.png\" alt=\"The logo of NIPS (Neural Information Processing Systems)\">\n",
    "\n",
    "Let’s start by looking at the content of the file"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "** **\n",
    "#### Step 1: Loading Data <a class=\"anchor\\\" id=\"load_data\"></a>\n",
    "** **\n",
    "For this tutorial, we’ll use the dataset of papers published in NeurIPS (NIPS) conference which is one of the most prestigious yearly events in the machine learning community. The CSV data file contains information on the different NeurIPS papers that were published from 1987 until 2016 (29 years!). These papers discuss a wide variety of topics in machine learning, from neural networks to optimization methods, and many more.\n",
    "\n",
    "Let’s start by looking at the content of the file"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import zipfile\n",
    "import pandas as pd\n",
    "import os\n",
    "\n",
    "# Open the zip file\n",
    "with zipfile.ZipFile(\"./data/NIPS Papers.zip\", \"r\") as zip_ref:\n",
    "    # Extract the file to a temporary directory\n",
    "    zip_ref.extractall(\"temp\")\n",
    "\n",
    "# Read the CSV file into a pandas DataFrame\n",
    "papers = pd.read_csv(\"temp/NIPS Papers/papers.csv\")\n",
    "\n",
    "# Print head\n",
    "papers.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "** **\n",
    "#### Step 2: Data Cleaning <a class=\"anchor\\\" id=\"clean_data\"></a>\n",
    "** **\n",
    "\n",
    "Since the goal of this analysis is to perform topic modeling, let's focus only on the text data from each paper, and drop other metadata columns. Also, for the demonstration, we'll only look at 100 papers"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Remove the columns\n",
    "papers = papers.drop(columns=['id', 'event_type', 'pdf_name'], axis=1).sample(100)\n",
    "\n",
    "# Print out the first rows of papers\n",
    "papers.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "##### Remove punctuation/lower casing\n",
    "\n",
    "Next, let’s perform a simple preprocessing on the content of `paper_text` column to make them more amenable for analysis, and reliable results. To do that, we’ll use a regular expression to remove any punctuation, and then lowercase the text"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Load the regular expression library\n",
    "import re\n",
    "\n",
    "# Remove punctuation\n",
    "papers['paper_text_processed'] = \\\n",
    "papers['paper_text'].map(lambda x: re.sub('[,\\.!?]', '', x))\n",
    "\n",
    "# Convert the titles to lowercase\n",
    "papers['paper_text_processed'] = \\\n",
    "papers['paper_text_processed'].map(lambda x: x.lower())\n",
    "\n",
    "# Print out the first rows of papers\n",
    "papers['paper_text_processed'].head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "** **\n",
    "#### Step 3: Exploratory Analysis <a class=\"anchor\\\" id=\"eda\"></a>\n",
    "** **\n",
    "\n",
    "To verify whether the preprocessing, we’ll make a simple word cloud using the `wordcloud` package to get a visual representation of most common words. It is key to understanding the data and ensuring we are on the right track, and if any more preprocessing is necessary before training the model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Import the wordcloud library\n",
    "from wordcloud import WordCloud\n",
    "\n",
    "# Join the different processed titles together.\n",
    "long_string = ','.join(list(papers['paper_text_processed'].values))\n",
    "\n",
    "# Create a WordCloud object\n",
    "wordcloud = WordCloud(background_color=\"white\", max_words=1000, contour_width=3, contour_color='steelblue')\n",
    "\n",
    "# Generate a word cloud\n",
    "wordcloud.generate(long_string)\n",
    "\n",
    "# Visualize the word cloud\n",
    "wordcloud.to_image()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "** **\n",
    "#### Step 4: Prepare text for LDA analysis <a class=\"anchor\\\" id=\"data_preparation\"></a>\n",
    "** **\n",
    "\n",
    "Next, let’s work to transform the textual data in a format that will serve as an input for training LDA model. We start by tokenizing the text and removing stopwords. Next, we convert the tokenized object into a corpus and dictionary."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import gensim\n",
    "from gensim.utils import simple_preprocess\n",
    "import nltk\n",
    "nltk.download('stopwords')\n",
    "from nltk.corpus import stopwords\n",
    "\n",
    "stop_words = stopwords.words('english')\n",
    "stop_words.extend(['from', 'subject', 're', 'edu', 'use'])\n",
    "\n",
    "def sent_to_words(sentences):\n",
    "    for sentence in sentences:\n",
    "        # deacc=True removes punctuations\n",
    "        yield(gensim.utils.simple_preprocess(str(sentence), deacc=True))\n",
    "\n",
    "def remove_stopwords(texts):\n",
    "    return [[word for word in simple_preprocess(str(doc)) \n",
    "             if word not in stop_words] for doc in texts]\n",
    "\n",
    "\n",
    "data = papers.paper_text_processed.values.tolist()\n",
    "data_words = list(sent_to_words(data))\n",
    "\n",
    "# remove stop words\n",
    "data_words = remove_stopwords(data_words)\n",
    "\n",
    "print(data_words[:1][0][:30])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import gensim.corpora as corpora\n",
    "\n",
    "# Create Dictionary\n",
    "id2word = corpora.Dictionary(data_words)\n",
    "\n",
    "# Create Corpus\n",
    "texts = data_words\n",
    "\n",
    "# Term Document Frequency\n",
    "corpus = [id2word.doc2bow(text) for text in texts]\n",
    "\n",
    "# View\n",
    "print(corpus[:1][0][:30])"
   ]
  },
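  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Each corpus entry is a list of `(token_id, frequency)` pairs. As a small readability sketch, we can map the ids back to words using the dictionary:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Human-readable view of the first document's bag-of-words\n",
    "print([(id2word[token_id], freq) for token_id, freq in corpus[0][:10]])"
   ]
  },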
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "** **\n",
    "#### Step 5: LDA model tranining <a class=\"anchor\\\" id=\"train_model\"></a>\n",
    "** **\n",
    "\n",
    "To keep things simple, we'll keep all the parameters to default except for inputting the number of topics. For this tutorial, we will build a model with 10 topics where each topic is a combination of keywords, and each keyword contributes a certain weightage to the topic."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from pprint import pprint\n",
    "\n",
    "# number of topics\n",
    "num_topics = 10\n",
    "\n",
    "# Build LDA model\n",
    "lda_model = gensim.models.LdaMulticore(corpus=corpus,\n",
    "                                       id2word=id2word,\n",
    "                                       num_topics=num_topics)\n",
    "\n",
    "# Print the Keyword in the 10 topics\n",
    "pprint(lda_model.print_topics())\n",
    "doc_lda = lda_model[corpus]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "** **\n",
    "#### Step 6: Analyzing our LDA model <a class=\"anchor\\\" id=\"results\"></a>\n",
    "** **\n",
    "\n",
    "Now that we have a trained model let’s visualize the topics for interpretability. To do so, we’ll use a popular visualization package, pyLDAvis which is designed to help interactively with:\n",
    "\n",
    "1. Better understanding and interpreting individual topics, and\n",
    "2. Better understanding the relationships between the topics.\n",
    "\n",
    "For (1), you can manually select each topic to view its top most frequent and/or “relevant” terms, using different values of the λ parameter. This can help when you’re trying to assign a human interpretable name or “meaning” to each topic.\n",
    "\n",
    "For (2), exploring the Intertopic Distance Plot can help you learn about how topics relate to each other, including potential higher-level structure between groups of topics."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pyLDAvis.gensim_models as gensimvis\n",
    "import pickle \n",
    "import pyLDAvis\n",
    "\n",
    "# Visualize the topics\n",
    "pyLDAvis.enable_notebook()\n",
    "\n",
    "LDAvis_data_filepath = os.path.join('./results/ldavis_prepared_'+str(num_topics))\n",
    "\n",
    "# # this is a bit time consuming - make the if statement True\n",
    "# # if you want to execute visualization prep yourself\n",
    "if 1 == 1:\n",
    "    LDAvis_prepared = gensimvis.prepare(lda_model, corpus, id2word)\n",
    "    with open(LDAvis_data_filepath, 'wb') as f:\n",
    "        pickle.dump(LDAvis_prepared, f)\n",
    "\n",
    "# load the pre-prepared pyLDAvis data from disk\n",
    "with open(LDAvis_data_filepath, 'rb') as f:\n",
    "    LDAvis_prepared = pickle.load(f)\n",
    "\n",
    "pyLDAvis.save_html(LDAvis_prepared, './results/ldavis_prepared_'+ str(num_topics) +'.html')\n",
    "\n",
    "LDAvis_prepared"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "** **\n",
    "#### Closing Notes\n",
    "Machine learning has become increasingly popular over the past decade, and recent advances in computational availability have led to exponential growth to people looking for ways how new methods can be incorporated to advance the field of Natural Language Processing.\n",
    "\n",
    "Often, we treat topic models as black-box algorithms, but hopefully, this article addressed to shed light on the underlying math, and intuitions behind it, and high-level code to get you started with any textual data.\n",
    "\n",
    "In the next article, we’ll go one step deeper into understanding how you can evaluate the performance of topic models, tune its hyper-parameters to get more intuitive and reliable results.\n",
    "\n",
    "** **\n",
    "#### References:\n",
    "1. Topic model — Wikipedia. https://en.wikipedia.org/wiki/Topic_model\n",
    "2. Distributed Strategies for Topic Modeling. https://www.ideals.illinois.edu/bitstream/handle/2142/46405/ParallelTopicModels.pdf?sequence=2&isAllowed=y\n",
    "3. Topic Mapping — Software — Resources — Amaral Lab. https://amaral.northwestern.edu/resources/software/topic-mapping\n",
    "4. A Survey of Topic Modeling in Text Mining. https://thesai.org/Downloads/Volume6No1/Paper_21-A_Survey_of_Topic_Modeling_in_Text_Mining.pdf\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.16"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}


================================================
FILE: natural-language-processing/topic-modeling/pyproject.toml
================================================
[tool.poetry]
name = "topic-modeling"
version = "0.1.0"
description = ""
authors = ["Shashank Kapadia <shashank.kapadia@randstadusa.com>"]
readme = "README.md"
packages = [{include = "topic_modeling"}]

[tool.poetry.dependencies]
python = ">=3.9,<3.10"
jupyterlab = "^3.5.2"
pandas = "^1.5.2"
gensim = "^4.3.0"
nltk = "^3.8"
spacy = "^3.4.4"
tqdm = "^4.64.1"
pyldavis = "^3.3.1"
wordcloud = "^1.8.2.2"


[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"


================================================
FILE: natural-language-processing/topic-modeling/results/lda_tuning_results.csv
================================================
Validation_Set,Topics,Alpha,Beta,Coherence
75% Corpus,2,0.01,0.01,0.25978135607988706
75% Corpus,2,0.01,0.31,0.2668476722607104
75% Corpus,2,0.01,0.61,0.2776409400108895
75% Corpus,2,0.01,0.9099999999999999,0.2716233211418745
75% Corpus,2,0.01,symmetric,0.27332996032921053
75% Corpus,2,0.31,0.01,0.25978135607988706
75% Corpus,2,0.31,0.31,0.2668476722607104
75% Corpus,2,0.31,0.61,0.2776409400108895
75% Corpus,2,0.31,0.9099999999999999,0.2716233211418745
75% Corpus,2,0.31,symmetric,0.27332996032921053
75% Corpus,2,0.61,0.01,0.25978135607988706
75% Corpus,2,0.61,0.31,0.2668476722607104
75% Corpus,2,0.61,0.61,0.2776409400108895
75% Corpus,2,0.61,0.9099999999999999,0.2716233211418745
75% Corpus,2,0.61,symmetric,0.27332996032921053
75% Corpus,2,0.9099999999999999,0.01,0.25978135607988706
75% Corpus,2,0.9099999999999999,0.31,0.2668476722607104
75% Corpus,2,0.9099999999999999,0.61,0.27764094001088946
75% Corpus,2,0.9099999999999999,0.9099999999999999,0.2766757371950356
75% Corpus,2,0.9099999999999999,symmetric,0.27332996032921053
75% Corpus,2,symmetric,0.01,0.25978135607988706
75% Corpus,2,symmetric,0.31,0.2668476722607104
75% Corpus,2,symmetric,0.61,0.2776409400108895
75% Corpus,2,symmetric,0.9099999999999999,0.2716233211418745
75% Corpus,2,symmetric,symmetric,0.27332996032921053
75% Corpus,2,asymmetric,0.01,0.25978135607988706
75% Corpus,2,asymmetric,0.31,0.2668476722607104
75% Corpus,2,asymmetric,0.61,0.27764094001088957
75% Corpus,2,asymmetric,0.9099999999999999,0.27162332114187443
75% Corpus,2,asymmetric,symmetric,0.27332996032921053
75% Corpus,3,0.01,0.01,0.27839627006352724
75% Corpus,3,0.01,0.31,0.2714832672445246
75% Corpus,3,0.01,0.61,0.27129619317484943
75% Corpus,3,0.01,0.9099999999999999,0.26967265698428594
75% Corpus,3,0.01,symmetric,0.2710418286483521
75% Corpus,3,0.31,0.01,0.27839627006352724
75% Corpus,3,0.31,0.31,0.2714832672445246
75% Corpus,3,0.31,0.61,0.27129619317484943
75% Corpus,3,0.31,0.9099999999999999,0.26967265698428594
75% Corpus,3,0.31,symmetric,0.2741049024792329
75% Corpus,3,0.61,0.01,0.27839627006352724
75% Corpus,3,0.61,0.31,0.27351453271228904
75% Corpus,3,0.61,0.61,0.2730873971333492
75% Corpus,3,0.61,0.9099999999999999,0.26967265698428594
75% Corpus,3,0.61,symmetric,0.27410490247923297
75% Corpus,3,0.9099999999999999,0.01,0.27839627006352724
75% Corpus,3,0.9099999999999999,0.31,0.27351453271228904
75% Corpus,3,0.9099999999999999,0.61,0.2766291342273802
75% Corpus,3,0.9099999999999999,0.9099999999999999,0.26645454631910853
75% Corpus,3,0.9099999999999999,symmetric,0.2714832672445246
75% Corpus,3,symmetric,0.01,0.27839627006352724
75% Corpus,3,symmetric,0.31,0.2714832672445246
75% Corpus,3,symmetric,0.61,0.27129619317484943
75% Corpus,3,symmetric,0.9099999999999999,0.26967265698428594
75% Corpus,3,symmetric,symmetric,0.27410490247923297
75% Corpus,3,asymmetric,0.01,0.27839627006352724
75% Corpus,3,asymmetric,0.31,0.2761361679469974
75% Corpus,3,asymmetric,0.61,0.27335893705876174
75% Corpus,3,asymmetric,0.9099999999999999,0.2679744346749598
75% Corpus,3,asymmetric,symmetric,0.27410490247923297
75% Corpus,4,0.01,0.01,0.2668910500383095
75% Corpus,4,0.01,0.31,0.2838954443896148
75% Corpus,4,0.01,0.61,0.2865926544014952
75% Corpus,4,0.01,0.9099999999999999,0.27003887281216177
75% Corpus,4,0.01,symmetric,0.2796510251967742
75% Corpus,4,0.31,0.01,0.2668910500383095
75% Corpus,4,0.31,0.31,0.2838954443896148
75% Corpus,4,0.31,0.61,0.2865926544014952
75% Corpus,4,0.31,0.9099999999999999,0.2725567640010955
75% Corpus,4,0.31,symmetric,0.2806724093952981
75% Corpus,4,0.61,0.01,0.2668910500383095
75% Corpus,4,0.61,0.31,0.28505646918509386
75% Corpus,4,0.61,0.61,0.2892439806686463
75% Corpus,4,0.61,0.9099999999999999,0.2725567640010955
75% Corpus,4,0.61,symmetric,0.2806724093952981
75% Corpus,4,0.9099999999999999,0.01,0.2685576018605267
75% Corpus,4,0.9099999999999999,0.31,0.2860778533836178
75% Corpus,4,0.9099999999999999,0.61,0.2892439806686463
75% Corpus,4,0.9099999999999999,0.9099999999999999,0.2725567640010955
75% Corpus,4,0.9099999999999999,symmetric,0.27955372987186383
75% Corpus,4,symmetric,0.01,0.2668910500383095
75% Corpus,4,symmetric,0.31,0.2838954443896148
75% Corpus,4,symmetric,0.61,0.2865926544014952
75% Corpus,4,symmetric,0.9099999999999999,0.2725567640010955
75% Corpus,4,symmetric,symmetric,0.2806724093952981
75% Corpus,4,asymmetric,0.01,0.26812725545164134
75% Corpus,4,asymmetric,0.31,0.2838954443896148
75% Corpus,4,asymmetric,0.61,0.29144326581678964
75% Corpus,4,asymmetric,0.9099999999999999,0.2721780507981776
75% Corpus,4,asymmetric,symmetric,0.27675604929105146
75% Corpus,5,0.01,0.01,0.2921917497600287
75% Corpus,5,0.01,0.31,0.27552154453031585
75% Corpus,5,0.01,0.61,0.269308750481684
75% Corpus,5,0.01,0.9099999999999999,0.2709825760478901
75% Corpus,5,0.01,symmetric,0.2748624070876895
75% Corpus,5,0.31,0.01,0.2921917497600287
75% Corpus,5,0.31,0.31,0.2748594189290374
75% Corpus,5,0.31,0.61,0.27297615024851607
75% Corpus,5,0.31,0.9099999999999999,0.27487775415711974
75% Corpus,5,0.31,symmetric,0.27717279279720247
75% Corpus,5,0.61,0.01,0.29578845568370865
75% Corpus,5,0.61,0.31,0.27395015677623524
75% Corpus,5,0.61,0.61,0.27297615024851607
75% Corpus,5,0.61,0.9099999999999999,0.2740301275419981
75% Corpus,5,0.61,symmetric,0.2771727927972025
75% Corpus,5,0.9099999999999999,0.01,0.2957884556837086
75% Corpus,5,0.9099999999999999,0.31,0.27395015677623524
75% Corpus,5,0.9099999999999999,0.61,0.28404895953113735
75% Corpus,5,0.9099999999999999,0.9099999999999999,0.277640513968081
75% Corpus,5,0.9099999999999999,symmetric,0.27934536161569634
75% Corpus,5,symmetric,0.01,0.2921917497600287
75% Corpus,5,symmetric,0.31,0.2748594189290374
75% Corpus,5,symmetric,0.61,0.2689322892718954
75% Corpus,5,symmetric,0.9099999999999999,0.27487775415711974
75% Corpus,5,symmetric,symmetric,0.2748624070876895
75% Corpus,5,asymmetric,0.01,0.29578845568370865
75% Corpus,5,asymmetric,0.31,0.2757452278352188
75% Corpus,5,asymmetric,0.61,0.26239407325290853
75% Corpus,5,asymmetric,0.9099999999999999,0.2686874093590562
75% Corpus,5,asymmetric,symmetric,0.2770349759061833
75% Corpus,6,0.01,0.01,0.30284473516050225
75% Corpus,6,0.01,0.31,0.30249950909918444
75% Corpus,6,0.01,0.61,0.3121782769555992
75% Corpus,6,0.01,0.9099999999999999,0.35411824844325124
75% Corpus,6,0.01,symmetric,0.30436985837998276
75% Corpus,6,0.31,0.01,0.301390971353645
75% Corpus,6,0.31,0.31,0.3024063840590527
75% Corpus,6,0.31,0.61,0.31042451210080446
75% Corpus,6,0.31,0.9099999999999999,0.35411824844325124
75% Corpus,6,0.31,symmetric,0.30491985962114915
75% Corpus,6,0.61,0.01,0.30064456740988804
75% Corpus,6,0.61,0.31,0.30715638376430365
75% Corpus,6,0.61,0.61,0.31042451210080446
75% Corpus,6,0.61,0.9099999999999999,0.35418725691606756
75% Corpus,6,0.61,symmetric,0.3049198596211492
75% Corpus,6,0.9099999999999999,0.01,0.3013337842863722
75% Corpus,6,0.9099999999999999,0.31,0.3024679491620339
75% Corpus,6,0.9099999999999999,0.61,0.31042451210080446
75% Corpus,6,0.9099999999999999,0.9099999999999999,0.3541872569160675
75% Corpus,6,0.9099999999999999,symmetric,0.30491985962114915
75% Corpus,6,symmetric,0.01,0.30065930794388435
75% Corpus,6,symmetric,0.31,0.3024063840590527
75% Corpus,6,symmetric,0.61,0.31042451210080446
75% Corpus,6,symmetric,0.9099999999999999,0.35411824844325124
75% Corpus,6,symmetric,symmetric,0.30436985837998276
75% Corpus,6,asymmetric,0.01,0.2967291360253754
75% Corpus,6,asymmetric,0.31,0.302076431936051
75% Corpus,6,asymmetric,0.61,0.30847841491074096
75% Corpus,6,asymmetric,0.9099999999999999,0.35477042247541973
75% Corpus,6,asymmetric,symmetric,0.3073932906534706
75% Corpus,7,0.01,0.01,0.2811852190841381
75% Corpus,7,0.01,0.31,0.2920220131136521
75% Corpus,7,0.01,0.61,0.28852068336946435
75% Corpus,7,0.01,0.9099999999999999,0.316288019095773
75% Corpus,7,0.01,symmetric,0.2726152139083012
75% Corpus,7,0.31,0.01,0.2788837794017795
75% Corpus,7,0.31,0.31,0.2909961139203829
75% Corpus,7,0.31,0.61,0.28852068336946435
75% Corpus,7,0.31,0.9099999999999999,0.3129742202428836
75% Corpus,7,0.31,symmetric,0.2777182230257198
75% Corpus,7,0.61,0.01,0.28327368320570007
75% Corpus,7,0.61,0.31,0.28935990578371756
75% Corpus,7,0.61,0.61,0.28852068336946435
75% Corpus,7,0.61,0.9099999999999999,0.31405801487937557
75% Corpus,7,0.61,symmetric,0.2759279034494479
75% Corpus,7,0.9099999999999999,0.01,0.28327368320570007
75% Corpus,7,0.9099999999999999,0.31,0.28722285177113666
75% Corpus,7,0.9099999999999999,0.61,0.28966178578989427
75% Corpus,7,0.9099999999999999,0.9099999999999999,0.31405801487937557
75% Corpus,7,0.9099999999999999,symmetric,0.27734639862412164
75% Corpus,7,symmetric,0.01,0.2788837794017795
75% Corpus,7,symmetric,0.31,0.2909961139203829
75% Corpus,7,symmetric,0.61,0.28852068336946435
75% Corpus,7,symmetric,0.9099999999999999,0.31628801909577303
75% Corpus,7,symmetric,symmetric,0.2756697323965734
75% Corpus,7,asymmetric,0.01,0.2853642185426731
75% Corpus,7,asymmetric,0.31,0.2907271450321393
75% Corpus,7,asymmetric,0.61,0.29361676225955274
75% Corpus,7,asymmetric,0.9099999999999999,0.2882054578232691
75% Corpus,7,asymmetric,symmetric,0.2755194552200007
75% Corpus,8,0.01,0.01,0.2808202028308223
75% Corpus,8,0.01,0.31,0.29736349070880286
75% Corpus,8,0.01,0.61,0.2981078394239831
75% Corpus,8,0.01,0.9099999999999999,0.33233988889385435
75% Corpus,8,0.01,symmetric,0.28156849418007524
75% Corpus,8,0.31,0.01,0.28097129035113944
75% Corpus,8,0.31,0.31,0.29941241997917767
75% Corpus,8,0.31,0.61,0.2960462608237895
75% Corpus,8,0.31,0.9099999999999999,0.327937391019251
75% Corpus,8,0.31,symmetric,0.28207106831879114
75% Corpus,8,0.61,0.01,0.2809772515659067
75% Corpus,8,0.61,0.31,0.2993455174990556
75% Corpus,8,0.61,0.61,0.2946710626297879
75% Corpus,8,0.61,0.9099999999999999,0.32359289177614337
75% Corpus,8,0.61,symmetric,0.28138865001186986
75% Corpus,8,0.9099999999999999,0.01,0.27976485139485124
75% Corpus,8,0.9099999999999999,0.31,0.2956217410776679
75% Corpus,8,0.9099999999999999,0.61,0.28864637585951036
75% Corpus,8,0.9099999999999999,0.9099999999999999,0.3282635584644864
75% Corpus,8,0.9099999999999999,symmetric,0.28148737285248604
75% Corpus,8,symmetric,0.01,0.2808202028308223
75% Corpus,8,symmetric,0.31,0.29736349070880286
75% Corpus,8,symmetric,0.61,0.297949677831361
75% Corpus,8,symmetric,0.9099999999999999,0.3265139021008282
75% Corpus,8,symmetric,symmetric,0.2817931736933517
75% Corpus,8,asymmetric,0.01,0.27839412750067105
75% Corpus,8,asymmetric,0.31,0.296460251572274
75% Corpus,8,asymmetric,0.61,0.297148250775704
75% Corpus,8,asymmetric,0.9099999999999999,0.32649247107917795
75% Corpus,8,asymmetric,symmetric,0.2785059288493066
75% Corpus,9,0.01,0.01,0.30218586423505117
75% Corpus,9,0.01,0.31,0.3020235109956903
75% Corpus,9,0.01,0.61,0.4185691849535152
75% Corpus,9,0.01,0.9099999999999999,0.30527967183321003
75% Corpus,9,0.01,symmetric,0.3060635646496002
75% Corpus,9,0.31,0.01,0.30068508513688286
75% Corpus,9,0.31,0.31,0.3021506007873651
75% Corpus,9,0.31,0.61,0.40687439307467294
75% Corpus,9,0.31,0.9099999999999999,0.3073421317714173
75% Corpus,9,0.31,symmetric,0.3074145374333964
75% Corpus,9,0.61,0.01,0.3008820210254886
75% Corpus,9,0.61,0.31,0.30275011169684873
75% Corpus,9,0.61,0.61,0.41015152208333183
75% Corpus,9,0.61,0.9099999999999999,0.30255055512967155
75% Corpus,9,0.61,symmetric,0.30715762371551203
75% Corpus,9,0.9099999999999999,0.01,0.3022010046079812
75% Corpus,9,0.9099999999999999,0.31,0.30477890773318156
75% Corpus,9,0.9099999999999999,0.61,0.40433106836971433
75% Corpus,9,0.9099999999999999,0.9099999999999999,0.307644042466123
75% Corpus,9,0.9099999999999999,symmetric,0.30605307577755286
75% Corpus,9,symmetric,0.01,0.3013158303146456
75% Corpus,9,symmetric,0.31,0.304290454027783
75% Corpus,9,symmetric,0.61,0.4140952699790183
75% Corpus,9,symmetric,0.9099999999999999,0.30527967183321
75% Corpus,9,symmetric,symmetric,0.3046097002166359
75% Corpus,9,asymmetric,0.01,0.30129966466596664
75% Corpus,9,asymmetric,0.31,0.30429095996678135
75% Corpus,9,asymmetric,0.61,0.41788064240742073
75% Corpus,9,asymmetric,0.9099999999999999,0.3025975330524115
75% Corpus,9,asymmetric,symmetric,0.30669601144755215
75% Corpus,10,0.01,0.01,0.29670164537919275
75% Corpus,10,0.01,0.31,0.3047813348299382
75% Corpus,10,0.01,0.61,0.31606878233356517
75% Corpus,10,0.01,0.9099999999999999,0.3377296641607972
75% Corpus,10,0.01,symmetric,0.29319673176987077
75% Corpus,10,0.31,0.01,0.29805457831216586
75% Corpus,10,0.31,0.31,0.30369633038354127
75% Corpus,10,0.31,0.61,0.30998451194916454
75% Corpus,10,0.31,0.9099999999999999,0.34268251293470897
75% Corpus,10,0.31,symmetric,0.29319673176987077
75% Corpus,10,0.61,0.01,0.29386231031811644
75% Corpus,10,0.61,0.31,0.29541931052055126
75% Corpus,10,0.61,0.61,0.309970817294693
75% Corpus,10,0.61,0.9099999999999999,0.3453018084648031
75% Corpus,10,0.61,symmetric,0.29319673176987077
75% Corpus,10,0.9099999999999999,0.01,0.2945143416112511
75% Corpus,10,0.9099999999999999,0.31,0.34045282991315623
75% Corpus,10,0.9099999999999999,0.61,0.3126412466766651
75% Corpus,10,0.9099999999999999,0.9099999999999999,0.3435699386195771
75% Corpus,10,0.9099999999999999,symmetric,0.29319673176987077
75% Corpus,10,symmetric,0.01,0.29805457831216586
75% Corpus,10,symmetric,0.31,0.30478133482993813
75% Corpus,10,symmetric,0.61,0.31606878233356517
75% Corpus,10,symmetric,0.9099999999999999,0.33772966416079714
75% Corpus,10,symmetric,symmetric,0.29319673176987077
75% Corpus,10,asymmetric,0.01,0.29815109815940033
75% Corpus,10,asymmetric,0.31,0.35146410607200645
75% Corpus,10,asymmetric,0.61,0.3196615767839669
75% Corpus,10,asymmetric,0.9099999999999999,0.34012655333540415
75% Corpus,10,asymmetric,symmetric,0.2930959026114379
100% Corpus,2,0.01,0.01,0.26049496936796
100% Corpus,2,0.01,0.31,0.2631487596403128
100% Corpus,2,0.01,0.61,0.2670430149871908
100% Corpus,2,0.01,0.9099999999999999,0.2784737874245884
100% Corpus,2,0.01,symmetric,0.2659027879879802
100% Corpus,2,0.31,0.01,0.26049496936796
100% Corpus,2,0.31,0.31,0.2631487596403128
100% Corpus,2,0.31,0.61,0.2670430149871908
100% Corpus,2,0.31,0.9099999999999999,0.2784737874245884
100% Corpus,2,0.31,symmetric,0.2659027879879802
100% Corpus,2,0.61,0.01,0.26049496936796007
100% Corpus,2,0.61,0.31,0.2631487596403128
100% Corpus,2,0.61,0.61,0.2670430149871908
100% Corpus,2,0.61,0.9099999999999999,0.27847378742458834
100% Corpus,2,0.61,symmetric,0.2659027879879803
100% Corpus,2,0.9099999999999999,0.01,0.26049496936796007
100% Corpus,2,0.9099999999999999,0.31,0.2631487596403128
100% Corpus,2,0.9099999999999999,0.61,0.2670430149871908
100% Corpus,2,0.9099999999999999,0.9099999999999999,0.27847378742458834
100% Corpus,2,0.9099999999999999,symmetric,0.2659027879879803
100% Corpus,2,symmetric,0.01,0.26049496936796007
100% Corpus,2,symmetric,0.31,0.2631487596403128
100% Corpus,2,symmetric,0.61,0.2670430149871908
100% Corpus,2,symmetric,0.9099999999999999,0.27847378742458834
100% Corpus,2,symmetric,symmetric,0.2659027879879802
100% Corpus,2,asymmetric,0.01,0.26049496936796
100% Corpus,2,asymmetric,0.31,0.2638307040567668
100% Corpus,2,asymmetric,0.61,0.2670430149871908
100% Corpus,2,asymmetric,0.9099999999999999,0.2784737874245884
100% Corpus,2,asymmetric,symmetric,0.2659027879879803
100% Corpus,3,0.01,0.01,0.27711689254474037
100% Corpus,3,0.01,0.31,0.27802374195120044
100% Corpus,3,0.01,0.61,0.27993004576503216
100% Corpus,3,0.01,0.9099999999999999,0.28461303163756196
100% Corpus,3,0.01,symmetric,0.27802374195120044
100% Corpus,3,0.31,0.01,0.2803842121014907
100% Corpus,3,0.31,0.31,0.27802374195120044
100% Corpus,3,0.31,0.61,0.27993004576503216
100% Corpus,3,0.31,0.9099999999999999,0.2836001930609014
100% Corpus,3,0.31,symmetric,0.27802374195120044
100% Corpus,3,0.61,0.01,0.2803842121014907
100% Corpus,3,0.61,0.31,0.27802374195120044
100% Corpus,3,0.61,0.61,0.28095006391110916
100% Corpus,3,0.61,0.9099999999999999,0.2836495488056456
100% Corpus,3,0.61,symmetric,0.27802374195120044
100% Corpus,3,0.9099999999999999,0.01,0.28041475692228796
100% Corpus,3,0.9099999999999999,0.31,0.27802374195120044
100% Corpus,3,0.9099999999999999,0.61,0.28095006391110916
100% Corpus,3,0.9099999999999999,0.9099999999999999,0.2836495488056456
100% Corpus,3,0.9099999999999999,symmetric,0.27802374195120044
100% Corpus,3,symmetric,0.01,0.2803842121014907
100% Corpus,3,symmetric,0.31,0.27802374195120044
100% Corpus,3,symmetric,0.61,0.27993004576503216
100% Corpus,3,symmetric,0.9099999999999999,0.2836495488056456
100% Corpus,3,symmetric,symmetric,0.27802374195120044
100% Corpus,3,asymmetric,0.01,0.2803842121014907
100% Corpus,3,asymmetric,0.31,0.27802374195120044
100% Corpus,3,asymmetric,0.61,0.28095006391110916
100% Corpus,3,asymmetric,0.9099999999999999,0.2836495488056456
100% Corpus,3,asymmetric,symmetric,0.27802374195120044
100% Corpus,4,0.01,0.01,0.2809362113240368
100% Corpus,4,0.01,0.31,0.27835632925620063
100% Corpus,4,0.01,0.61,0.27555206267257515
100% Corpus,4,0.01,0.9099999999999999,0.28446330333455255
100% Corpus,4,0.01,symmetric,0.27835632925620063
100% Corpus,4,0.31,0.01,0.2809362113240368
100% Corpus,4,0.31,0.31,0.27835632925620063
100% Corpus,4,0.31,0.61,0.27548045699445267
100% Corpus,4,0.31,0.9099999999999999,0.28446330333455255
100% Corpus,4,0.31,symmetric,0.27835632925620063
100% Corpus,4,0.61,0.01,0.2809362113240368
100% Corpus,4,0.61,0.31,0.27835632925620063
100% Corpus,4,0.61,0.61,0.27966327435739713
100% Corpus,4,0.61,0.9099999999999999,0.28446330333455255
100% Corpus,4,0.61,symmetric,0.2782671534254213
100% Corpus,4,0.9099999999999999,0.01,0.28177729599690937
100% Corpus,4,0.9099999999999999,0.31,0.27835632925620063
100% Corpus,4,0.9099999999999999,0.61,0.28263222170754976
100% Corpus,4,0.9099999999999999,0.9099999999999999,0.28446330333455255
100% Corpus,4,0.9099999999999999,symmetric,0.27749145169031925
100% Corpus,4,symmetric,0.01,0.2809362113240368
100% Corpus,4,symmetric,0.31,0.27835632925620063
100% Corpus,4,symmetric,0.61,0.27548045699445267
100% Corpus,4,symmetric,0.9099999999999999,0.28446330333455255
100% Corpus,4,symmetric,symmetric,0.27835632925620063
100% Corpus,4,asymmetric,0.01,0.2809362113240368
100% Corpus,4,asymmetric,0.31,0.27835632925620063
100% Corpus,4,asymmetric,0.61,0.27335019355355233
100% Corpus,4,asymmetric,0.9099999999999999,0.28118168667280763
100% Corpus,4,asymmetric,symmetric,0.2782671534254213
100% Corpus,5,0.01,0.01,0.3055810333138698
100% Corpus,5,0.01,0.31,0.2921971083432925
100% Corpus,5,0.01,0.61,0.29351918649837283
100% Corpus,5,0.01,0.9099999999999999,0.31027882311552357
100% Corpus,5,0.01,symmetric,0.28798663645353445
100% Corpus,5,0.31,0.01,0.3055810333138698
100% Corpus,5,0.31,0.31,0.2919593387953088
100% Corpus,5,0.31,0.61,0.29351918649837283
100% Corpus,5,0.31,0.9099999999999999,0.3013857090688862
100% Corpus,5,0.31,symmetric,0.28798663645353445
100% Corpus,5,0.61,0.01,0.30778470383924467
100% Corpus,5,0.61,0.31,0.29467018241870313
100% Corpus,5,0.61,0.61,0.29688907251265284
100% Corpus,5,0.61,0.9099999999999999,0.3013857090688862
100% Corpus,5,0.61,symmetric,0.29224715357405645
100% Corpus,5,0.9099999999999999,0.01,0.30778470383924467
100% Corpus,5,0.9099999999999999,0.31,0.2891142546834146
100% Corpus,5,0.9099999999999999,0.61,0.29749568404827875
100% Corpus,5,0.9099999999999999,0.9099999999999999,0.30064353604119265
100% Corpus,5,0.9099999999999999,symmetric,0.2903640490356588
100% Corpus,5,symmetric,0.01,0.3055810333138698
100% Corpus,5,symmetric,0.31,0.2919593387953088
100% Corpus,5,symmetric,0.61,0.29351918649837283
100% Corpus,5,symmetric,0.9099999999999999,0.3013857090688862
100% Corpus,5,symmetric,symmetric,0.28798663645353445
100% Corpus,5,asymmetric,0.01,0.3025740990082661
100% Corpus,5,asymmetric,0.31,0.2891198159919257
100% Corpus,5,asymmetric,0.61,0.2931349532251887
100% Corpus,5,asymmetric,0.9099999999999999,0.30951060360405697
100% Corpus,5,asymmetric,symmetric,0.2903640490356588
100% Corpus,6,0.01,0.01,0.2958760348579074
100% Corpus,6,0.01,0.31,0.2941800616166869
100% Corpus,6,0.01,0.61,0.2971723188349142
100% Corpus,6,0.01,0.9099999999999999,0.30879954124012965
100% Corpus,6,0.01,symmetric,0.29805452202591254
100% Corpus,6,0.31,0.01,0.2969800967267447
100% Corpus,6,0.31,0.31,0.2951136969205841
100% Corpus,6,0.31,0.61,0.2971723188349142
100% Corpus,6,0.31,0.9099999999999999,0.30879954124012965
100% Corpus,6,0.31,symmetric,0.29820145833395656
100% Corpus,6,0.61,0.01,0.2969800967267447
100% Corpus,6,0.61,0.31,0.2951136969205841
100% Corpus,6,0.61,0.61,0.2977299345012703
100% Corpus,6,0.61,0.9099999999999999,0.3049789423287517
100% Corpus,6,0.61,symmetric,0.29863606640969337
100% Corpus,6,0.9099999999999999,0.01,0.29390398336601525
100% Corpus,6,0.9099999999999999,0.31,0.2951136969205841
100% Corpus,6,0.9099999999999999,0.61,0.2957487280234154
100% Corpus,6,0.9099999999999999,0.9099999999999999,0.3049789423287517
100% Corpus,6,0.9099999999999999,symmetric,0.29863606640969337
100% Corpus,6,symmetric,0.01,0.2955217784385554
100% Corpus,6,symmetric,0.31,0.2941800616166869
100% Corpus,6,symmetric,0.61,0.2971723188349142
100% Corpus,6,symmetric,0.9099999999999999,0.30879954124012965
100% Corpus,6,symmetric,symmetric,0.29820145833395656
100% Corpus,6,asymmetric,0.01,0.2949603603199485
100% Corpus,6,asymmetric,0.31,0.2951136969205841
100% Corpus,6,asymmetric,0.61,0.2977299345012703
100% Corpus,6,asymmetric,0.9099999999999999,0.30674292283526966
100% Corpus,6,asymmetric,symmetric,0.29823105460471666
100% Corpus,7,0.01,0.01,0.30073678510163443
100% Corpus,7,0.01,0.31,0.3093075297409759
100% Corpus,7,0.01,0.61,0.2985273077717666
100% Corpus,7,0.01,0.9099999999999999,0.28108775204553077
100% Corpus,7,0.01,symmetric,0.29593451694501705
100% Corpus,7,0.31,0.01,0.3021816720953532
100% Corpus,7,0.31,0.31,0.3136135696595105
100% Corpus,7,0.31,0.61,0.30525285357617626
100% Corpus,7,0.31,0.9099999999999999,0.2810941659084761
100% Corpus,7,0.31,symmetric,0.29621498771621146
100% Corpus,7,0.61,0.01,0.3021816720953532
100% Corpus,7,0.61,0.31,0.3136135696595104
100% Corpus,7,0.61,0.61,0.30936066130234136
100% Corpus,7,0.61,0.9099999999999999,0.28629245148004184
100% Corpus,7,0.61,symmetric,0.29731679986925613
100% Corpus,7,0.9099999999999999,0.01,0.3020919465269357
100% Corpus,7,0.9099999999999999,0.31,0.3136135696595105
100% Corpus,7,0.9099999999999999,0.61,0.3151197257139406
100% Corpus,7,0.9099999999999999,0.9099999999999999,0.288468301912423
100% Corpus,7,0.9099999999999999,symmetric,0.30014119040856624
100% Corpus,7,symmetric,0.01,0.29800123606081913
100% Corpus,7,symmetric,0.31,0.3093075297409759
100% Corpus,7,symmetric,0.61,0.29852730777176656
100% Corpus,7,symmetric,0.9099999999999999,0.2833650342764223
100% Corpus,7,symmetric,symmetric,0.29621498771621146
100% Corpus,7,asymmetric,0.01,0.3014826617961082
100% Corpus,7,asymmetric,0.31,0.30946569097108306
100% Corpus,7,asymmetric,0.61,0.29780622606183804
100% Corpus,7,asymmetric,0.9099999999999999,0.27769095312678477
100% Corpus,7,asymmetric,symmetric,0.29669074530709133
100% Corpus,8,0.01,0.01,0.27689325792791325
100% Corpus,8,0.01,0.31,0.28335268992548035
100% Corpus,8,0.01,0.61,0.29447751279748635
100% Corpus,8,0.01,0.9099999999999999,0.2933443347019431
100% Corpus,8,0.01,symmetric,0.2719332438180937
100% Corpus,8,0.31,0.01,0.2778826015547943
100% Corpus,8,0.31,0.31,0.2825893964563385
100% Corpus,8,0.31,0.61,0.2939449197205991
100% Corpus,8,0.31,0.9099999999999999,0.2933443347019431
100% Corpus,8,0.31,symmetric,0.2729328842173082
100% Corpus,8,0.61,0.01,0.2778826015547943
100% Corpus,8,0.61,0.31,0.28595528080117094
100% Corpus,8,0.61,0.61,0.2927074117432516
100% Corpus,8,0.61,0.9099999999999999,0.2872200703569201
100% Corpus,8,0.61,symmetric,0.2732337709535984
100% Corpus,8,0.9099999999999999,0.01,0.27786820825055075
100% Corpus,8,0.9099999999999999,0.31,0.28418596093474835
100% Corpus,8,0.9099999999999999,0.61,0.2900190376327208
100% Corpus,8,0.9099999999999999,0.9099999999999999,0.29123665567102414
100% Corpus,8,0.9099999999999999,symmetric,0.27323377095359846
100% Corpus,8,symmetric,0.01,0.27689325792791325
100% Corpus,8,symmetric,0.31,0.28420779222803205
100% Corpus,8,symmetric,0.61,0.2952855947321743
100% Corpus,8,symmetric,0.9099999999999999,0.2933443347019431
100% Corpus,8,symmetric,symmetric,0.2725123137958163
100% Corpus,8,asymmetric,0.01,0.2761789762953015
100% Corpus,8,asymmetric,0.31,0.2799130295170522
100% Corpus,8,asymmetric,0.61,0.2948589392161668
100% Corpus,8,asymmetric,0.9099999999999999,0.2812546037633434
100% Corpus,8,asymmetric,symmetric,0.2729328842173082
100% Corpus,9,0.01,0.01,0.3055895680888938
100% Corpus,9,0.01,0.31,0.30302760647148597
100% Corpus,9,0.01,0.61,0.2996054006558133
100% Corpus,9,0.01,0.9099999999999999,0.34404649042246366
100% Corpus,9,0.01,symmetric,0.3022567927575817
100% Corpus,9,0.31,0.01,0.3050065555696478
100% Corpus,9,0.31,0.31,0.30328591827752077
100% Corpus,9,0.31,0.61,0.29917768071256
100% Corpus,9,0.31,0.9099999999999999,0.34479247631815907
100% Corpus,9,0.31,symmetric,0.30430285315228395
100% Corpus,9,0.61,0.01,0.30426709798506846
100% Corpus,9,0.61,0.31,0.3018951218284658
100% Corpus,9,0.61,0.61,0.3001071122248435
100% Corpus,9,0.61,0.9099999999999999,0.34618320085549326
100% Corpus,9,0.61,symmetric,0.30433563585169243
100% Corpus,9,0.9099999999999999,0.01,0.3055723111286964
100% Corpus,9,0.9099999999999999,0.31,0.3030863932378131
100% Corpus,9,0.9099999999999999,0.61,0.29813786201586745
100% Corpus,9,0.9099999999999999,0.9099999999999999,0.34500630064915166
100% Corpus,9,0.9099999999999999,symmetric,0.3038428140388971
100% Corpus,9,symmetric,0.01,0.3023576157049908
100% Corpus,9,symmetric,0.31,0.30302760647148597
100% Corpus,9,symmetric,0.61,0.3003121132770352
100% Corpus,9,symmetric,0.9099999999999999,0.34214587751079284
100% Corpus,9,symmetric,symmetric,0.3022567927575817
100% Corpus,9,asymmetric,0.01,0.3057340479868098
100% Corpus,9,asymmetric,0.31,0.30328591827752077
100% Corpus,9,asymmetric,0.61,0.3011057586148252
100% Corpus,9,asymmetric,0.9099999999999999,0.350861967504945
100% Corpus,9,asymmetric,symmetric,0.30251782074437106
100% Corpus,10,0.01,0.01,0.283398008758761
100% Corpus,10,0.01,0.31,0.2751258030801618
100% Corpus,10,0.01,0.61,0.2915458869869755
100% Corpus,10,0.01,0.9099999999999999,0.2948656869892196
100% Corpus,10,0.01,symmetric,0.2777952167218004
100% Corpus,10,0.31,0.01,0.28125840155500537
100% Corpus,10,0.31,0.31,0.2762321145354264
100% Corpus,10,0.31,0.61,0.29037856739931606
100% Corpus,10,0.31,0.9099999999999999,0.29658208038434386
100% Corpus,10,0.31,symmetric,0.27688479666982796
100% Corpus,10,0.61,0.01,0.2803111695414585
100% Corpus,10,0.61,0.31,0.27818213083876936
100% Corpus,10,0.61,0.61,0.2894460065949672
100% Corpus,10,0.61,0.9099999999999999,0.29802196235401307
100% Corpus,10,0.61,symmetric,0.27614054331196536
100% Corpus,10,0.9099999999999999,0.01,0.2817458696637633
100% Corpus,10,0.9099999999999999,0.31,0.2783073583069227
100% Corpus,10,0.9099999999999999,0.61,0.2889530806223174
100% Corpus,10,0.9099999999999999,0.9099999999999999,0.2950248587541427
100% Corpus,10,0.9099999999999999,symmetric,0.27614054331196536
100% Corpus,10,symmetric,0.01,0.2821805736812627
100% Corpus,10,symmetric,0.31,0.2751258030801618
100% Corpus,10,symmetric,0.61,0.2911575680096669
100% Corpus,10,symmetric,0.9099999999999999,0.2948656869892196
100% Corpus,10,symmetric,symmetric,0.2777952167218004
100% Corpus,10,asymmetric,0.01,0.2818119554111271
100% Corpus,10,asymmetric,0.31,0.2804636061791579
100% Corpus,10,asymmetric,0.61,0.29020292429978206
100% Corpus,10,asymmetric,0.9099999999999999,0.2973427325310818
100% Corpus,10,asymmetric,symmetric,0.278615868791953


================================================
FILE: natural-language-processing/topic-modeling/results/ldavis_prepared_10.html
================================================

<link rel="stylesheet" type="text/css" href="https://cdn.jsdelivr.net/gh/bmabey/pyLDAvis@3.3.1/pyLDAvis/js/ldavis.v1.0.0.css">


<div id="ldavis_el86201113124030561323853923"></div>
<script type="text/javascript">

var ldavis_el86201113124030561323853923_data = { /* truncated in this preview */ };
/* pyLDAvis data payload for a 10-topic LDA model: "mdsDat" holds the 2-D
   inter-topic map (x/y coordinates, cluster, marginal topic Freq per topic)
   and "tinfo" holds the per-topic term tables (Term, Freq, Total). The full
   script is in the 199,041-char source file. */
</script>
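
This visualization (and plausibly its extension-less sibling ldavis_prepared_10) is standard pyLDAvis output for a 10-topic LDA model. A minimal sketch of how such a file is typically produced, assuming gensim and pyLDAvis with a toy stand-in corpus; this is a plausible reconstruction, not code from the repository.

import pickle
import gensim
import gensim.corpora as corpora
import pyLDAvis
import pyLDAvis.gensim_models  # module name in pyLDAvis >= 3.x; older releases exposed pyLDAvis.gensim

# Toy corpus standing in for the notebook's real data.
texts = [["topic", "model", "map", "term", "frequency"],
         ["pyldavis", "topic", "distance", "term", "relevance"],
         ["lda", "model", "visualisation", "topic", "map"]]
id2word = corpora.Dictionary(texts)
corpus = [id2word.doc2bow(t) for t in texts]
lda_model = gensim.models.LdaModel(corpus=corpus, id2word=id2word, num_topics=10)

vis = pyLDAvis.gensim_models.prepare(lda_model, corpus, id2word)
pyLDAvis.save_html(vis, "ldavis_prepared_10.html")  # emits the CSS link, <div> and data script seen above
with open("ldavis_prepared_10", "wb") as f:         # plausibly how the extension-less file was written
    pickle.dump(vis, f)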
SYMBOL INDEX (4 symbols across 1 file)

FILE: natural-language-processing/embedding-models/utils.py
  function plot_similarity (line 12) | def plot_similarity(labels, features, rotation, version):
  function cosine_similarity (line 25) | def cosine_similarity(vector_1, vector_2):
  function sts_benchmark (line 47) | def sts_benchmark(model, type="dev"):
  function process_model_input (line 121) | def process_model_input(data):
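
The index lists the embedding-model helpers: a similarity-heatmap plotter, a cosine-similarity measure, an STS-benchmark evaluator, and an input pre-processor. For reference, here is a minimal sketch of what a cosine_similarity with the indexed signature computes; the repository's own implementation may differ in detail.

import numpy as np

def cosine_similarity(vector_1, vector_2):
    """Cosine of the angle between two embedding vectors, in [-1, 1]."""
    v1 = np.asarray(vector_1, dtype=float)
    v2 = np.asarray(vector_2, dtype=float)
    return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))

print(cosine_similarity([1.0, 0.0], [1.0, 1.0]))  # ~0.7071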
Condensed preview — 26 files, each showing path, character count, and a content snippet (full structured content: 948K chars).
[
  {
    "path": "LICENSE",
    "chars": 35149,
    "preview": "                    GNU GENERAL PUBLIC LICENSE\n                       Version 3, 29 June 2007\n\n Copyright (C) 2007 Free "
  },
  {
    "path": "general-data-science/similarities-measures/pyproject.toml",
    "chars": 441,
    "preview": "[tool.poetry]\nname = \"similarities-measures\"\nversion = \"0.1.0\"\ndescription = \"\"\nauthors = [\"Shashank Kapadia <shashank.k"
  },
  {
    "path": "general-data-science/similarities-measures/similarity-measures.ipynb",
    "chars": 16689,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"1d654a7e-fe50-495a-9235-303b16100d51\",\n   \"metadata\": {},\n   \"so"
  },
  {
    "path": "natural-language-processing/embedding-models/data/training_data.csv",
    "chars": 9030,
    "preview": "base,ref,similarity\ndresser,data scientist,0.5\nclubhost,data scientist,0.5\nco-pilot,data analyst,0.5\ndata analyst,textil"
  },
  {
    "path": "natural-language-processing/embedding-models/domain_adaption_fine_tune_nlp_model.ipynb",
    "chars": 15423,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"cba912be-879a-4a9b-a7e9-180f42555cb9\",\n   \"metadata\": {},\n   \"so"
  },
  {
    "path": "natural-language-processing/embedding-models/pyproject.toml",
    "chars": 680,
    "preview": "[tool.poetry]\nname = \"embedding-models\"\nversion = \"0.1.0\"\ndescription = \"\"\nauthors = [\"Shashank Kapadia <smhkapadia@gmai"
  },
  {
    "path": "natural-language-processing/embedding-models/utils.py",
    "chars": 4932,
    "preview": "import numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\nimport tqdm\nfrom scipy "
  },
  {
    "path": "natural-language-processing/text-processing/Building Blocks Text Pre-Processing.ipynb",
    "chars": 11891,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"## Building Blocks: Text Pre-Proces"
  },
  {
    "path": "natural-language-processing/text-processing/pyproject.toml",
    "chars": 361,
    "preview": "[tool.poetry]\nname = \"text-processing\"\nversion = \"0.1.0\"\ndescription = \"\"\nauthors = [\"Shashank Kapadia <smhkapadia@gmail"
  },
  {
    "path": "natural-language-processing/topic-modeling/Evaluate Topic Models.ipynb",
    "chars": 28417,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Evaluate Topic Model in Python:"
  },
  {
    "path": "natural-language-processing/topic-modeling/Introduction to Topic Modeling.ipynb",
    "chars": 15237,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"## Introduction\\n\",\n    \"##### How "
  },
  {
    "path": "natural-language-processing/topic-modeling/pyproject.toml",
    "chars": 488,
    "preview": "[tool.poetry]\nname = \"topic-modeling\"\nversion = \"0.1.0\"\ndescription = \"\"\nauthors = [\"Shashank Kapadia <shashank.kapadia@"
  },
  {
    "path": "natural-language-processing/topic-modeling/results/lda_tuning_results.csv",
    "chars": 27591,
    "preview": "Validation_Set,Topics,Alpha,Beta,Coherence\n75% Corpus,2,0.01,0.01,0.25978135607988706\n75% Corpus,2,0.01,0.31,0.266847672"
  },
  {
    "path": "natural-language-processing/topic-modeling/results/ldavis_prepared_10.html",
    "chars": 199041,
    "preview": "\n<link rel=\"stylesheet\" type=\"text/css\" href=\"https://cdn.jsdelivr.net/gh/bmabey/pyLDAvis@3.3.1/pyLDAvis/js/ldavis.v1.0."
  },
  {
    "path": "natural-language-processing/topic-modeling/results/ldavis_tuned_8.html",
    "chars": 144535,
    "preview": "\n<link rel=\"stylesheet\" type=\"text/css\" href=\"https://cdn.jsdelivr.net/gh/bmabey/pyLDAvis@3.3.1/pyLDAvis/js/ldavis.v1.0."
  },
  {
    "path": "natural-language-processing/transformers-series/pyproject.toml",
    "chars": 596,
    "preview": "[tool.poetry]\nname = \"transformers-models\"\nversion = \"0.1.0\"\ndescription = \"\"\nauthors = [\"Shashank Kapadia <smhkapadia@g"
  },
  {
    "path": "natural-language-processing/transformers-series/sentiment_analysis_bert.ipynb",
    "chars": 96889,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"bb537d94-7c06-416c-823c-2111e595065f\",\n   \"metadata\": {},\n   \"so"
  },
  {
    "path": "recommender/published_notebooks/recommendation_python_lightfm.ipynb",
    "chars": 77168,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"## Recommendation in Python: LighFM"
  },
  {
    "path": "recommender/results/profiler_books_metadata_1.html",
    "chars": 87976,
    "preview": "<!doctype html>\n\n<html lang=\"en\">\n<head>\n  <meta charset=\"utf-8\">\n\n  <title>Profile report</title>\n  <meta name=\"descrip"
  },
  {
    "path": "recommender/results/profiler_books_metadata_2.html",
    "chars": 84793,
    "preview": "<!doctype html>\n\n<html lang=\"en\">\n<head>\n  <meta charset=\"utf-8\">\n\n  <title>Profile report</title>\n  <meta name=\"descrip"
  },
  {
    "path": "recommender/results/profiler_interactions.html",
    "chars": 42890,
    "preview": "<!doctype html>\n\n<html lang=\"en\">\n<head>\n  <meta charset=\"utf-8\">\n\n  <title>Profile report</title>\n  <meta name=\"descrip"
  }
]

// ... and 5 more files (not shown in this preview)

About this extraction

This page contains the full source code of the kapadias/mediumposts GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction covers 26 files (879.1 KB, approximately 394.7k tokens) and includes a symbol index of 4 extracted symbols (functions, classes, methods, constants, and types; in this repository, all four are functions). Use it with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. The full output can be copied to the clipboard or downloaded as a .txt file.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.
