[
  {
    "path": ".gitignore",
    "content": "venv/*\ndata/\nresults/\n.idea/\n*.pyc\nros_gp_mpc/acados_models/*\nc_generated_code/\n*.egg-info/\ndist/\nbuild/\nacados_models/"
  },
  {
    "path": ".gitmodules",
    "content": "[submodule \"ml-casadi\"]\n\tpath = ml-casadi\n\turl = https://github.com/TUM-AAS/ml-casadi\n"
  },
  {
    "path": "LICENSE",
    "content": "                    GNU GENERAL PUBLIC LICENSE\n                       Version 3, 29 June 2007\n\n Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>\n Everyone is permitted to copy and distribute verbatim copies\n of this license document, but changing it is not allowed.\n\n                            Preamble\n\n  The GNU General Public License is a free, copyleft license for\nsoftware and other kinds of works.\n\n  The licenses for most software and other practical works are designed\nto take away your freedom to share and change the works.  By contrast,\nthe GNU General Public License is intended to guarantee your freedom to\nshare and change all versions of a program--to make sure it remains free\nsoftware for all its users.  We, the Free Software Foundation, use the\nGNU General Public License for most of our software; it applies also to\nany other work released this way by its authors.  You can apply it to\nyour programs, too.\n\n  When we speak of free software, we are referring to freedom, not\nprice.  Our General Public Licenses are designed to make sure that you\nhave the freedom to distribute copies of free software (and charge for\nthem if you wish), that you receive source code or can get it if you\nwant it, that you can change the software or use pieces of it in new\nfree programs, and that you know you can do these things.\n\n  To protect your rights, we need to prevent others from denying you\nthese rights or asking you to surrender the rights.  Therefore, you have\ncertain responsibilities if you distribute copies of the software, or if\nyou modify it: responsibilities to respect the freedom of others.\n\n  For example, if you distribute copies of such a program, whether\ngratis or for a fee, you must pass on to the recipients the same\nfreedoms that you received.  You must make sure that they, too, receive\nor can get the source code.  
And you must show them these terms so they\nknow their rights.\n\n  Developers that use the GNU GPL protect your rights with two steps:\n(1) assert copyright on the software, and (2) offer you this License\ngiving you legal permission to copy, distribute and/or modify it.\n\n  For the developers' and authors' protection, the GPL clearly explains\nthat there is no warranty for this free software.  For both users' and\nauthors' sake, the GPL requires that modified versions be marked as\nchanged, so that their problems will not be attributed erroneously to\nauthors of previous versions.\n\n  Some devices are designed to deny users access to install or run\nmodified versions of the software inside them, although the manufacturer\ncan do so.  This is fundamentally incompatible with the aim of\nprotecting users' freedom to change the software.  The systematic\npattern of such abuse occurs in the area of products for individuals to\nuse, which is precisely where it is most unacceptable.  Therefore, we\nhave designed this version of the GPL to prohibit the practice for those\nproducts.  If such problems arise substantially in other domains, we\nstand ready to extend this provision to those domains in future versions\nof the GPL, as needed to protect the freedom of users.\n\n  Finally, every program is threatened constantly by software patents.\nStates should not allow patents to restrict development and use of\nsoftware on general-purpose computers, but in those that do, we wish to\navoid the special danger that patents applied to a free program could\nmake it effectively proprietary.  To prevent this, the GPL assures that\npatents cannot be used to render the program non-free.\n\n  The precise terms and conditions for copying, distribution and\nmodification follow.\n\n                       TERMS AND CONDITIONS\n\n  0. 
Definitions.\n\n  \"This License\" refers to version 3 of the GNU General Public License.\n\n  \"Copyright\" also means copyright-like laws that apply to other kinds of\nworks, such as semiconductor masks.\n\n  \"The Program\" refers to any copyrightable work licensed under this\nLicense.  Each licensee is addressed as \"you\".  \"Licensees\" and\n\"recipients\" may be individuals or organizations.\n\n  To \"modify\" a work means to copy from or adapt all or part of the work\nin a fashion requiring copyright permission, other than the making of an\nexact copy.  The resulting work is called a \"modified version\" of the\nearlier work or a work \"based on\" the earlier work.\n\n  A \"covered work\" means either the unmodified Program or a work based\non the Program.\n\n  To \"propagate\" a work means to do anything with it that, without\npermission, would make you directly or secondarily liable for\ninfringement under applicable copyright law, except executing it on a\ncomputer or modifying a private copy.  Propagation includes copying,\ndistribution (with or without modification), making available to the\npublic, and in some countries other activities as well.\n\n  To \"convey\" a work means any kind of propagation that enables other\nparties to make or receive copies.  Mere interaction with a user through\na computer network, with no transfer of a copy, is not conveying.\n\n  An interactive user interface displays \"Appropriate Legal Notices\"\nto the extent that it includes a convenient and prominently visible\nfeature that (1) displays an appropriate copyright notice, and (2)\ntells the user that there is no warranty for the work (except to the\nextent that warranties are provided), that licensees may convey the\nwork under this License, and how to view a copy of this License.  If\nthe interface presents a list of user commands or options, such as a\nmenu, a prominent item in the list meets this criterion.\n\n  1. 
Source Code.\n\n  The \"source code\" for a work means the preferred form of the work\nfor making modifications to it.  \"Object code\" means any non-source\nform of a work.\n\n  A \"Standard Interface\" means an interface that either is an official\nstandard defined by a recognized standards body, or, in the case of\ninterfaces specified for a particular programming language, one that\nis widely used among developers working in that language.\n\n  The \"System Libraries\" of an executable work include anything, other\nthan the work as a whole, that (a) is included in the normal form of\npackaging a Major Component, but which is not part of that Major\nComponent, and (b) serves only to enable use of the work with that\nMajor Component, or to implement a Standard Interface for which an\nimplementation is available to the public in source code form.  A\n\"Major Component\", in this context, means a major essential component\n(kernel, window system, and so on) of the specific operating system\n(if any) on which the executable work runs, or a compiler used to\nproduce the work, or an object code interpreter used to run it.\n\n  The \"Corresponding Source\" for a work in object code form means all\nthe source code needed to generate, install, and (for an executable\nwork) run the object code and to modify the work, including scripts to\ncontrol those activities.  However, it does not include the work's\nSystem Libraries, or general-purpose tools or generally available free\nprograms which are used unmodified in performing those activities but\nwhich are not part of the work.  
For example, Corresponding Source\nincludes interface definition files associated with source files for\nthe work, and the source code for shared libraries and dynamically\nlinked subprograms that the work is specifically designed to require,\nsuch as by intimate data communication or control flow between those\nsubprograms and other parts of the work.\n\n  The Corresponding Source need not include anything that users\ncan regenerate automatically from other parts of the Corresponding\nSource.\n\n  The Corresponding Source for a work in source code form is that\nsame work.\n\n  2. Basic Permissions.\n\n  All rights granted under this License are granted for the term of\ncopyright on the Program, and are irrevocable provided the stated\nconditions are met.  This License explicitly affirms your unlimited\npermission to run the unmodified Program.  The output from running a\ncovered work is covered by this License only if the output, given its\ncontent, constitutes a covered work.  This License acknowledges your\nrights of fair use or other equivalent, as provided by copyright law.\n\n  You may make, run and propagate covered works that you do not\nconvey, without conditions so long as your license otherwise remains\nin force.  You may convey covered works to others for the sole purpose\nof having them make modifications exclusively for you, or provide you\nwith facilities for running those works, provided that you comply with\nthe terms of this License in conveying all material for which you do\nnot control copyright.  Those thus making or running the covered works\nfor you must do so exclusively on your behalf, under your direction\nand control, on terms that prohibit them from making any copies of\nyour copyrighted material outside their relationship with you.\n\n  Conveying under any other circumstances is permitted solely under\nthe conditions stated below.  Sublicensing is not allowed; section 10\nmakes it unnecessary.\n\n  3. 
Protecting Users' Legal Rights From Anti-Circumvention Law.\n\n  No covered work shall be deemed part of an effective technological\nmeasure under any applicable law fulfilling obligations under article\n11 of the WIPO copyright treaty adopted on 20 December 1996, or\nsimilar laws prohibiting or restricting circumvention of such\nmeasures.\n\n  When you convey a covered work, you waive any legal power to forbid\ncircumvention of technological measures to the extent such circumvention\nis effected by exercising rights under this License with respect to\nthe covered work, and you disclaim any intention to limit operation or\nmodification of the work as a means of enforcing, against the work's\nusers, your or third parties' legal rights to forbid circumvention of\ntechnological measures.\n\n  4. Conveying Verbatim Copies.\n\n  You may convey verbatim copies of the Program's source code as you\nreceive it, in any medium, provided that you conspicuously and\nappropriately publish on each copy an appropriate copyright notice;\nkeep intact all notices stating that this License and any\nnon-permissive terms added in accord with section 7 apply to the code;\nkeep intact all notices of the absence of any warranty; and give all\nrecipients a copy of this License along with the Program.\n\n  You may charge any price or no price for each copy that you convey,\nand you may offer support or warranty protection for a fee.\n\n  5. Conveying Modified Source Versions.\n\n  You may convey a work based on the Program, or the modifications to\nproduce it from the Program, in the form of source code under the\nterms of section 4, provided that you also meet all of these conditions:\n\n    a) The work must carry prominent notices stating that you modified\n    it, and giving a relevant date.\n\n    b) The work must carry prominent notices stating that it is\n    released under this License and any conditions added under section\n    7.  
This requirement modifies the requirement in section 4 to\n    \"keep intact all notices\".\n\n    c) You must license the entire work, as a whole, under this\n    License to anyone who comes into possession of a copy.  This\n    License will therefore apply, along with any applicable section 7\n    additional terms, to the whole of the work, and all its parts,\n    regardless of how they are packaged.  This License gives no\n    permission to license the work in any other way, but it does not\n    invalidate such permission if you have separately received it.\n\n    d) If the work has interactive user interfaces, each must display\n    Appropriate Legal Notices; however, if the Program has interactive\n    interfaces that do not display Appropriate Legal Notices, your\n    work need not make them do so.\n\n  A compilation of a covered work with other separate and independent\nworks, which are not by their nature extensions of the covered work,\nand which are not combined with it such as to form a larger program,\nin or on a volume of a storage or distribution medium, is called an\n\"aggregate\" if the compilation and its resulting copyright are not\nused to limit the access or legal rights of the compilation's users\nbeyond what the individual works permit.  Inclusion of a covered work\nin an aggregate does not cause this License to apply to the other\nparts of the aggregate.\n\n  6. 
Conveying Non-Source Forms.\n\n  You may convey a covered work in object code form under the terms\nof sections 4 and 5, provided that you also convey the\nmachine-readable Corresponding Source under the terms of this License,\nin one of these ways:\n\n    a) Convey the object code in, or embodied in, a physical product\n    (including a physical distribution medium), accompanied by the\n    Corresponding Source fixed on a durable physical medium\n    customarily used for software interchange.\n\n    b) Convey the object code in, or embodied in, a physical product\n    (including a physical distribution medium), accompanied by a\n    written offer, valid for at least three years and valid for as\n    long as you offer spare parts or customer support for that product\n    model, to give anyone who possesses the object code either (1) a\n    copy of the Corresponding Source for all the software in the\n    product that is covered by this License, on a durable physical\n    medium customarily used for software interchange, for a price no\n    more than your reasonable cost of physically performing this\n    conveying of source, or (2) access to copy the\n    Corresponding Source from a network server at no charge.\n\n    c) Convey individual copies of the object code with a copy of the\n    written offer to provide the Corresponding Source.  This\n    alternative is allowed only occasionally and noncommercially, and\n    only if you received the object code with such an offer, in accord\n    with subsection 6b.\n\n    d) Convey the object code by offering access from a designated\n    place (gratis or for a charge), and offer equivalent access to the\n    Corresponding Source in the same way through the same place at no\n    further charge.  You need not require recipients to copy the\n    Corresponding Source along with the object code.  
If the place to\n    copy the object code is a network server, the Corresponding Source\n    may be on a different server (operated by you or a third party)\n    that supports equivalent copying facilities, provided you maintain\n    clear directions next to the object code saying where to find the\n    Corresponding Source.  Regardless of what server hosts the\n    Corresponding Source, you remain obligated to ensure that it is\n    available for as long as needed to satisfy these requirements.\n\n    e) Convey the object code using peer-to-peer transmission, provided\n    you inform other peers where the object code and Corresponding\n    Source of the work are being offered to the general public at no\n    charge under subsection 6d.\n\n  A separable portion of the object code, whose source code is excluded\nfrom the Corresponding Source as a System Library, need not be\nincluded in conveying the object code work.\n\n  A \"User Product\" is either (1) a \"consumer product\", which means any\ntangible personal property which is normally used for personal, family,\nor household purposes, or (2) anything designed or sold for incorporation\ninto a dwelling.  In determining whether a product is a consumer product,\ndoubtful cases shall be resolved in favor of coverage.  For a particular\nproduct received by a particular user, \"normally used\" refers to a\ntypical or common use of that class of product, regardless of the status\nof the particular user or of the way in which the particular user\nactually uses, or expects or is expected to use, the product.  
A product\nis a consumer product regardless of whether the product has substantial\ncommercial, industrial or non-consumer uses, unless such uses represent\nthe only significant mode of use of the product.\n\n  \"Installation Information\" for a User Product means any methods,\nprocedures, authorization keys, or other information required to install\nand execute modified versions of a covered work in that User Product from\na modified version of its Corresponding Source.  The information must\nsuffice to ensure that the continued functioning of the modified object\ncode is in no case prevented or interfered with solely because\nmodification has been made.\n\n  If you convey an object code work under this section in, or with, or\nspecifically for use in, a User Product, and the conveying occurs as\npart of a transaction in which the right of possession and use of the\nUser Product is transferred to the recipient in perpetuity or for a\nfixed term (regardless of how the transaction is characterized), the\nCorresponding Source conveyed under this section must be accompanied\nby the Installation Information.  But this requirement does not apply\nif neither you nor any third party retains the ability to install\nmodified object code on the User Product (for example, the work has\nbeen installed in ROM).\n\n  The requirement to provide Installation Information does not include a\nrequirement to continue to provide support service, warranty, or updates\nfor a work that has been modified or installed by the recipient, or for\nthe User Product in which it has been modified or installed.  
Access to a\nnetwork may be denied when the modification itself materially and\nadversely affects the operation of the network or violates the rules and\nprotocols for communication across the network.\n\n  Corresponding Source conveyed, and Installation Information provided,\nin accord with this section must be in a format that is publicly\ndocumented (and with an implementation available to the public in\nsource code form), and must require no special password or key for\nunpacking, reading or copying.\n\n  7. Additional Terms.\n\n  \"Additional permissions\" are terms that supplement the terms of this\nLicense by making exceptions from one or more of its conditions.\nAdditional permissions that are applicable to the entire Program shall\nbe treated as though they were included in this License, to the extent\nthat they are valid under applicable law.  If additional permissions\napply only to part of the Program, that part may be used separately\nunder those permissions, but the entire Program remains governed by\nthis License without regard to the additional permissions.\n\n  When you convey a copy of a covered work, you may at your option\nremove any additional permissions from that copy, or from any part of\nit.  (Additional permissions may be written to require their own\nremoval in certain cases when you modify the work.)  
You may place\nadditional permissions on material, added by you to a covered work,\nfor which you have or can give appropriate copyright permission.\n\n  Notwithstanding any other provision of this License, for material you\nadd to a covered work, you may (if authorized by the copyright holders of\nthat material) supplement the terms of this License with terms:\n\n    a) Disclaiming warranty or limiting liability differently from the\n    terms of sections 15 and 16 of this License; or\n\n    b) Requiring preservation of specified reasonable legal notices or\n    author attributions in that material or in the Appropriate Legal\n    Notices displayed by works containing it; or\n\n    c) Prohibiting misrepresentation of the origin of that material, or\n    requiring that modified versions of such material be marked in\n    reasonable ways as different from the original version; or\n\n    d) Limiting the use for publicity purposes of names of licensors or\n    authors of the material; or\n\n    e) Declining to grant rights under trademark law for use of some\n    trade names, trademarks, or service marks; or\n\n    f) Requiring indemnification of licensors and authors of that\n    material by anyone who conveys the material (or modified versions of\n    it) with contractual assumptions of liability to the recipient, for\n    any liability that these contractual assumptions directly impose on\n    those licensors and authors.\n\n  All other non-permissive additional terms are considered \"further\nrestrictions\" within the meaning of section 10.  If the Program as you\nreceived it, or any part of it, contains a notice stating that it is\ngoverned by this License along with a term that is a further\nrestriction, you may remove that term.  
If a license document contains\na further restriction but permits relicensing or conveying under this\nLicense, you may add to a covered work material governed by the terms\nof that license document, provided that the further restriction does\nnot survive such relicensing or conveying.\n\n  If you add terms to a covered work in accord with this section, you\nmust place, in the relevant source files, a statement of the\nadditional terms that apply to those files, or a notice indicating\nwhere to find the applicable terms.\n\n  Additional terms, permissive or non-permissive, may be stated in the\nform of a separately written license, or stated as exceptions;\nthe above requirements apply either way.\n\n  8. Termination.\n\n  You may not propagate or modify a covered work except as expressly\nprovided under this License.  Any attempt otherwise to propagate or\nmodify it is void, and will automatically terminate your rights under\nthis License (including any patent licenses granted under the third\nparagraph of section 11).\n\n  However, if you cease all violation of this License, then your\nlicense from a particular copyright holder is reinstated (a)\nprovisionally, unless and until the copyright holder explicitly and\nfinally terminates your license, and (b) permanently, if the copyright\nholder fails to notify you of the violation by some reasonable means\nprior to 60 days after the cessation.\n\n  Moreover, your license from a particular copyright holder is\nreinstated permanently if the copyright holder notifies you of the\nviolation by some reasonable means, this is the first time you have\nreceived notice of violation of this License (for any work) from that\ncopyright holder, and you cure the violation prior to 30 days after\nyour receipt of the notice.\n\n  Termination of your rights under this section does not terminate the\nlicenses of parties who have received copies or rights from you under\nthis License.  
If your rights have been terminated and not permanently\nreinstated, you do not qualify to receive new licenses for the same\nmaterial under section 10.\n\n  9. Acceptance Not Required for Having Copies.\n\n  You are not required to accept this License in order to receive or\nrun a copy of the Program.  Ancillary propagation of a covered work\noccurring solely as a consequence of using peer-to-peer transmission\nto receive a copy likewise does not require acceptance.  However,\nnothing other than this License grants you permission to propagate or\nmodify any covered work.  These actions infringe copyright if you do\nnot accept this License.  Therefore, by modifying or propagating a\ncovered work, you indicate your acceptance of this License to do so.\n\n  10. Automatic Licensing of Downstream Recipients.\n\n  Each time you convey a covered work, the recipient automatically\nreceives a license from the original licensors, to run, modify and\npropagate that work, subject to this License.  You are not responsible\nfor enforcing compliance by third parties with this License.\n\n  An \"entity transaction\" is a transaction transferring control of an\norganization, or substantially all assets of one, or subdividing an\norganization, or merging organizations.  If propagation of a covered\nwork results from an entity transaction, each party to that\ntransaction who receives a copy of the work also receives whatever\nlicenses to the work the party's predecessor in interest had or could\ngive under the previous paragraph, plus a right to possession of the\nCorresponding Source of the work from the predecessor in interest, if\nthe predecessor has it or can get it with reasonable efforts.\n\n  You may not impose any further restrictions on the exercise of the\nrights granted or affirmed under this License.  
For example, you may\nnot impose a license fee, royalty, or other charge for exercise of\nrights granted under this License, and you may not initiate litigation\n(including a cross-claim or counterclaim in a lawsuit) alleging that\nany patent claim is infringed by making, using, selling, offering for\nsale, or importing the Program or any portion of it.\n\n  11. Patents.\n\n  A \"contributor\" is a copyright holder who authorizes use under this\nLicense of the Program or a work on which the Program is based.  The\nwork thus licensed is called the contributor's \"contributor version\".\n\n  A contributor's \"essential patent claims\" are all patent claims\nowned or controlled by the contributor, whether already acquired or\nhereafter acquired, that would be infringed by some manner, permitted\nby this License, of making, using, or selling its contributor version,\nbut do not include claims that would be infringed only as a\nconsequence of further modification of the contributor version.  For\npurposes of this definition, \"control\" includes the right to grant\npatent sublicenses in a manner consistent with the requirements of\nthis License.\n\n  Each contributor grants you a non-exclusive, worldwide, royalty-free\npatent license under the contributor's essential patent claims, to\nmake, use, sell, offer for sale, import and otherwise run, modify and\npropagate the contents of its contributor version.\n\n  In the following three paragraphs, a \"patent license\" is any express\nagreement or commitment, however denominated, not to enforce a patent\n(such as an express permission to practice a patent or covenant not to\nsue for patent infringement).  
To \"grant\" such a patent license to a\nparty means to make such an agreement or commitment not to enforce a\npatent against the party.\n\n  If you convey a covered work, knowingly relying on a patent license,\nand the Corresponding Source of the work is not available for anyone\nto copy, free of charge and under the terms of this License, through a\npublicly available network server or other readily accessible means,\nthen you must either (1) cause the Corresponding Source to be so\navailable, or (2) arrange to deprive yourself of the benefit of the\npatent license for this particular work, or (3) arrange, in a manner\nconsistent with the requirements of this License, to extend the patent\nlicense to downstream recipients.  \"Knowingly relying\" means you have\nactual knowledge that, but for the patent license, your conveying the\ncovered work in a country, or your recipient's use of the covered work\nin a country, would infringe one or more identifiable patents in that\ncountry that you have reason to believe are valid.\n\n  If, pursuant to or in connection with a single transaction or\narrangement, you convey, or propagate by procuring conveyance of, a\ncovered work, and grant a patent license to some of the parties\nreceiving the covered work authorizing them to use, propagate, modify\nor convey a specific copy of the covered work, then the patent license\nyou grant is automatically extended to all recipients of the covered\nwork and works based on it.\n\n  A patent license is \"discriminatory\" if it does not include within\nthe scope of its coverage, prohibits the exercise of, or is\nconditioned on the non-exercise of one or more of the rights that are\nspecifically granted under this License.  
You may not convey a covered\nwork if you are a party to an arrangement with a third party that is\nin the business of distributing software, under which you make payment\nto the third party based on the extent of your activity of conveying\nthe work, and under which the third party grants, to any of the\nparties who would receive the covered work from you, a discriminatory\npatent license (a) in connection with copies of the covered work\nconveyed by you (or copies made from those copies), or (b) primarily\nfor and in connection with specific products or compilations that\ncontain the covered work, unless you entered into that arrangement,\nor that patent license was granted, prior to 28 March 2007.\n\n  Nothing in this License shall be construed as excluding or limiting\nany implied license or other defenses to infringement that may\notherwise be available to you under applicable patent law.\n\n  12. No Surrender of Others' Freedom.\n\n  If conditions are imposed on you (whether by court order, agreement or\notherwise) that contradict the conditions of this License, they do not\nexcuse you from the conditions of this License.  If you cannot convey a\ncovered work so as to satisfy simultaneously your obligations under this\nLicense and any other pertinent obligations, then as a consequence you may\nnot convey it at all.  For example, if you agree to terms that obligate you\nto collect a royalty for further conveying from those to whom you convey\nthe Program, the only way you could satisfy both those terms and this\nLicense would be to refrain entirely from conveying the Program.\n\n  13. Use with the GNU Affero General Public License.\n\n  Notwithstanding any other provision of this License, you have\npermission to link or combine any covered work with a work licensed\nunder version 3 of the GNU Affero General Public License into a single\ncombined work, and to convey the resulting work.  
The terms of this\nLicense will continue to apply to the part which is the covered work,\nbut the special requirements of the GNU Affero General Public License,\nsection 13, concerning interaction through a network will apply to the\ncombination as such.\n\n  14. Revised Versions of this License.\n\n  The Free Software Foundation may publish revised and/or new versions of\nthe GNU General Public License from time to time.  Such new versions will\nbe similar in spirit to the present version, but may differ in detail to\naddress new problems or concerns.\n\n  Each version is given a distinguishing version number.  If the\nProgram specifies that a certain numbered version of the GNU General\nPublic License \"or any later version\" applies to it, you have the\noption of following the terms and conditions either of that numbered\nversion or of any later version published by the Free Software\nFoundation.  If the Program does not specify a version number of the\nGNU General Public License, you may choose any version ever published\nby the Free Software Foundation.\n\n  If the Program specifies that a proxy can decide which future\nversions of the GNU General Public License can be used, that proxy's\npublic statement of acceptance of a version permanently authorizes you\nto choose that version for the Program.\n\n  Later license versions may give you additional or different\npermissions.  However, no additional obligations are imposed on any\nauthor or copyright holder as a result of your choosing to follow a\nlater version.\n\n  15. Disclaimer of Warranty.\n\n  THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY\nAPPLICABLE LAW.  EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT\nHOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM \"AS IS\" WITHOUT WARRANTY\nOF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,\nTHE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR\nPURPOSE.  
THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM\nIS WITH YOU.  SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF\nALL NECESSARY SERVICING, REPAIR OR CORRECTION.\n\n  16. Limitation of Liability.\n\n  IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING\nWILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS\nTHE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY\nGENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE\nUSE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF\nDATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD\nPARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),\nEVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF\nSUCH DAMAGES.\n\n  17. Interpretation of Sections 15 and 16.\n\n  If the disclaimer of warranty and limitation of liability provided\nabove cannot be given local legal effect according to their terms,\nreviewing courts shall apply local law that most closely approximates\nan absolute waiver of all civil liability in connection with the\nProgram, unless a warranty or assumption of liability accompanies a\ncopy of the Program in return for a fee.\n\n                     END OF TERMS AND CONDITIONS\n\n            How to Apply These Terms to Your New Programs\n\n  If you develop a new program, and you want it to be of the greatest\npossible use to the public, the best way to achieve this is to make it\nfree software which everyone can redistribute and change under these terms.\n\n  To do so, attach the following notices to the program.  
It is safest\nto attach them to the start of each source file to most effectively\nstate the exclusion of warranty; and each file should have at least\nthe \"copyright\" line and a pointer to where the full notice is found.\n\n    <one line to give the program's name and a brief idea of what it does.>\n    Copyright (C) <year>  <name of author>\n\n    This program is free software: you can redistribute it and/or modify\n    it under the terms of the GNU General Public License as published by\n    the Free Software Foundation, either version 3 of the License, or\n    (at your option) any later version.\n\n    This program is distributed in the hope that it will be useful,\n    but WITHOUT ANY WARRANTY; without even the implied warranty of\n    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n    GNU General Public License for more details.\n\n    You should have received a copy of the GNU General Public License\n    along with this program.  If not, see <https://www.gnu.org/licenses/>.\n\nAlso add information on how to contact you by electronic and paper mail.\n\n  If the program does terminal interaction, make it output a short\nnotice like this when it starts in an interactive mode:\n\n    <program>  Copyright (C) <year>  <name of author>\n    This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.\n    This is free software, and you are welcome to redistribute it\n    under certain conditions; type `show c' for details.\n\nThe hypothetical commands `show w' and `show c' should show the appropriate\nparts of the General Public License.  
Of course, your program's commands\nmight be different; for a GUI interface, you would use an \"about box\".\n\n  You should also get your employer (if you work as a programmer) or school,\nif any, to sign a \"copyright disclaimer\" for the program, if necessary.\nFor more information on this, and how to apply and follow the GNU GPL, see\n<https://www.gnu.org/licenses/>.\n\n  The GNU General Public License does not permit incorporating your program\ninto proprietary programs.  If your program is a subroutine library, you\nmay consider it more useful to permit linking proprietary applications with\nthe library.  If this is what you want to do, use the GNU Lesser General\nPublic License instead of this License.  But first, please read\n<https://www.gnu.org/licenses/why-not-lgpl.html>.\n"
  },
  {
    "path": "README.md",
    "content": "# Real-Time Neural MPC\n\nThis repository contains the code for experiments associated to our paper \n\n```\nReal-time Neural-MPC: Deep Learning Model Predictive Control for Quadrotors and Agile Robotic Platforms\n```\n[Arxiv Link](https://arxiv.org/pdf/2203.07747)\n\nIf you are looking for the ML-CasADi framework code you can find it [here](https://github.com/TUM-AAS/ml-casadi).\n\n## Installation\n### Checkout Submodules\n```\ngit submodule update --init --recursive\n```\n### Acados\n- Follow the [installation instructions](https://docs.acados.org/installation/index.html).\n- Install the [Python Interface](https://docs.acados.org/python_interface/index.html).\n- Ensure that `LD_LIBRARY_PATH` is set correctly (`DYLD_LIBRARY_PATH`on MacOS).\n- Ensure that `ACADOS_SOURCE_DIR` is set correctly.\n\n### Further Requirements\n```\npip install -r requirements.txt\n```\n\nMake sure the ML-CasADi framework is part of the python path.\n```\nexport PYTHONPATH=\"${PYTHONPATH}:<path-to-git>/ml-casadi\"\n```\nPython 3.9 is recommended.\n\n# Experiments\nThe provided code is based on the work of [Torrente et al.](https://github.com/uzh-rpg/data_driven_mpc) All functionality of the original code base is retained.\n\nChange the working directory to\n```\ncd ros_dd_mpc\n```\n## Simulation\n### Data Collection\nRun the following script to collect a few minutes of flight samples\n```\npython src/experiments/point_tracking_and_record.py --recording --dataset_name simplified_sim_dataset --simulation_time 300\n```\n\n### Fitting a MLP Model\nEdit the following variables of configuration file in `config/configuration_parameters.py` (class `ModelFitConfig`) so that the training script is referenced to the desired dataset. 
For redundancy, in order to identify the correct data file, we require to specify both the name of the dataset as well as the parameters used while acquiring the data.\nIn other words, you must input the simulator options used while running the previous python script. If you did not modify these variables earlier, you don't need to change anything this time as the default setting will work:\n```\n    # ## Dataset loading ## #\n    ds_name = \"simplified_sim_dataset\"\n    ds_metadata = {\n        \"noisy\": True,\n        \"drag\": True,\n        \"payload\": False,\n        \"motor_noise\": True\n    }\n```\n\nThe following command will train an MLP model with 4 hidden layers 64 neurons each to model the residual error on the velocities in x, y, and z direction (7, 8, 9 in the state).\nWe assign a name to the model for future referencing, e.g.: `simple_sim_mlp`\n```\npython src/model_fitting/mlp_fitting.py --model_name simple_sim_mlp --hidden_size 64 --hidden_layers 4 --x 7 8 9 --y 7 8 9 --epochs 100\n```\nThe model will be saved under the directory `ros_dd_mpc/results/model_fitting/<git_hash>/`\n\n### Fitting GP and RDRv\nFor instructions on how to fit a GP or RDRv model for comparison see the [here](https://github.com/uzh-rpg/data_driven_mpc)\n\n### Test the Fitted Model\n```\npython src/experiments/trajectory_test.py --model_version <git_hash> --model_name simple_sim_mlp --model_typ mlp_approx\n```\nwhere the `model_type` argument can be one of `mlp_approx`(Real-time Neural MPC), `mlp` (Naive Integration), `gp` (Gaussian Process Model).\n\nFor a baseline comparison result run the same script without model parameters:\n```\npython src/experiments/trajectory_test.py\n```\n\nMultiple models can be compared at once via\n```\npython src/experiments/comparative_experiment.py --model_version <git_hash_1 git_hash_2 ...> --model_name <name_1 name_2 ...> --model_type <type_1 type_2> --fast\n```\n\nResults are saved in the `results/` folder.\n\n### Citing\n\nIf you use 
this code in an academic context, please cite the following publication:\n\n```\n@article{salzmann2023neural,\n  title={Real-time Neural-MPC: Deep Learning Model Predictive Control for Quadrotors and Agile Robotic Platforms},\n  author={Salzmann, Tim and Kaufmann, Elia and Arrizabalaga, Jon and Pavone, Marco and Scaramuzza, Davide and Ryll, Markus},\n  journal={IEEE Robotics and Automation Letters},\n  doi={10.1109/LRA.2023.3246839},\n  year={2023}\n}\n```"
  },
  {
    "path": "requirements.txt",
    "content": "numpy==2.2\nscipy==1.15\ntqdm\nmatplotlib==3.9\nscikit-learn==1.6\ncasadi==3.6\npyquaternion\njoblib\npandas\nPyYAML\nrospkg>=1.3\ntorch==2.6\n\n-i https://rospypi.github.io/simple/\nrospy"
  },
  {
    "path": "ros_dd_mpc/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.8.3)\nproject(ros_dd_mpc)\n\n## Compile as C++11, supported in ROS Kinetic and newer\n# add_compile_options(-std=c++11)\n\n## Find catkin macros and libraries\n## if COMPONENTS list like find_package(catkin REQUIRED COMPONENTS xyz)\n## is used, also find other catkin packages\nfind_package(catkin REQUIRED COMPONENTS\n  rospy\n  std_msgs\n  message_generation\n)\n\n## System dependencies are found with CMake's conventions\n# find_package(Boost REQUIRED COMPONENTS system)\n\n\n## Uncomment this if the package has a setup.py. This macro ensures\n## modules and global scripts declared therein get installed\n## See http://ros.org/doc/api/catkin/html/user_guide/setup_dot_py.html\n# catkin_python_setup()\n\n################################################\n## Declare ROS messages, services and actions ##\n################################################\n\n## To declare and build messages, services or actions from within this\n## package, follow these steps:\n## * Let MSG_DEP_SET be the set of packages whose message types you use in\n##   your messages/services/actions (e.g. 
std_msgs, actionlib_msgs, ...).\n## * In the file package.xml:\n##   * add a build_depend tag for \"message_generation\"\n##   * add a build_depend and a exec_depend tag for each package in MSG_DEP_SET\n##   * If MSG_DEP_SET isn't empty the following dependency has been pulled in\n##     but can be declared for certainty nonetheless:\n##     * add a exec_depend tag for \"message_runtime\"\n## * In this file (CMakeLists.txt):\n##   * add \"message_generation\" and every package in MSG_DEP_SET to\n##     find_package(catkin REQUIRED COMPONENTS ...)\n##   * add \"message_runtime\" and every package in MSG_DEP_SET to\n##     catkin_package(CATKIN_DEPENDS ...)\n##   * uncomment the add_*_files sections below as needed\n##     and list every .msg/.srv/.action file to be processed\n##   * uncomment the generate_messages entry below\n##   * add every package in MSG_DEP_SET to generate_messages(DEPENDENCIES ...)\n\n## Generate messages in the 'msg' folder\nadd_message_files(\n  FILES\n  ReferenceTrajectory.msg\n)\n\n## Generate services in the 'srv' folder\n# add_service_files(\n#   FILES\n#   Service1.srv\n#   Service2.srv\n# )\n\n## Generate actions in the 'action' folder\n# add_action_files(\n#   FILES\n#   Action1.action\n#   Action2.action\n# )\n\n## Generate added messages and services with any dependencies listed here\ngenerate_messages(\n  DEPENDENCIES\n  std_msgs  # Or other packages containing msgs\n)\n\n################################################\n## Declare ROS dynamic reconfigure parameters ##\n################################################\n\n## To declare and build dynamic reconfigure parameters within this\n## package, follow these steps:\n## * In the file package.xml:\n##   * add a build_depend and a exec_depend tag for \"dynamic_reconfigure\"\n## * In this file (CMakeLists.txt):\n##   * add \"dynamic_reconfigure\" to\n##     find_package(catkin REQUIRED COMPONENTS ...)\n##   * uncomment the \"generate_dynamic_reconfigure_options\" section below\n##  
   and list every .cfg file to be processed\n\n## Generate dynamic reconfigure parameters in the 'cfg' folder\n# generate_dynamic_reconfigure_options(\n#   cfg/DynReconf1.cfg\n#   cfg/DynReconf2.cfg\n# )\n\n###################################\n## catkin specific configuration ##\n###################################\n## The catkin_package macro generates cmake config files for your package\n## Declare things to be passed to dependent projects\n## INCLUDE_DIRS: uncomment this if your package contains header files\n## LIBRARIES: libraries you create in this project that dependent projects also need\n## CATKIN_DEPENDS: catkin_packages dependent projects also need\n## DEPENDS: system dependencies of this project that dependent projects also need\ncatkin_package(\n#  INCLUDE_DIRS include\n#  LIBRARIES ros_python3_issues\n   CATKIN_DEPENDS message_runtime\n#  DEPENDS system_lib\n)\n\n###########\n## Build ##\n###########\n\n## Specify additional locations of header files\n## Your package locations should be listed before other locations\ninclude_directories(\n# include\n  ${catkin_INCLUDE_DIRS}\n)\n\n## Declare a C++ library\n# add_library(${PROJECT_NAME}\n#   src/${PROJECT_NAME}/ros_python3_issues.cpp\n# )\n\n## Add cmake target dependencies of the library\n## as an example, code may need to be generated before libraries\n## either from message generation or dynamic reconfigure\n# add_dependencies(${PROJECT_NAME} ${${PROJECT_NAME}_EXPORTED_TARGETS} ${catkin_EXPORTED_TARGETS})\n\n## Declare a C++ executable\n## With catkin_make all packages are built within a single CMake context\n## The recommended prefix ensures that target names across packages don't collide\n# add_executable(${PROJECT_NAME}_node src/ros_python3_issues_node.cpp)\n\n## Rename C++ executable without prefix\n## The above recommended prefix causes long target names, the following renames the\n## target back to the shorter version for ease of user use\n## e.g. 
\"rosrun someones_pkg node\" instead of \"rosrun someones_pkg someones_pkg_node\"\n# set_target_properties(${PROJECT_NAME}_node PROPERTIES OUTPUT_NAME node PREFIX \"\")\n\n## Add cmake target dependencies of the executable\n## same as for the library above\n# add_dependencies(${PROJECT_NAME}_node ${${PROJECT_NAME}_EXPORTED_TARGETS} ${catkin_EXPORTED_TARGETS})\n\n## Specify libraries to link a library or executable target against\n# target_link_libraries(${PROJECT_NAME}_node\n#   ${catkin_LIBRARIES}\n# )\n\n#############\n## Install ##\n#############\n\n# all install targets should use catkin DESTINATION variables\n# See http://ros.org/doc/api/catkin/html/adv_user_guide/variables.html\n\n## Mark executable scripts (Python etc.) for installation\n## in contrast to setup.py, you can choose the destination\n# install(PROGRAMS\n#   scripts/my_python_script\n#   DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}\n# )\n\n## Mark executables and/or libraries for installation\n# install(TARGETS ${PROJECT_NAME} ${PROJECT_NAME}_node\n#   ARCHIVE DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}\n#   LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}\n#   RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}\n# )\n\n## Mark cpp header files for installation\n# install(DIRECTORY include/${PROJECT_NAME}/\n#   DESTINATION ${CATKIN_PACKAGE_INCLUDE_DESTINATION}\n#   FILES_MATCHING PATTERN \"*.h\"\n#   PATTERN \".svn\" EXCLUDE\n# )\n\n## Mark other files for installation (e.g. 
launch and bag files, etc.)\n# install(FILES\n#   # myfile1\n#   # myfile2\n#   DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}\n# )\n\n#############\n## Testing ##\n#############\n\n## Add gtest based cpp test target and link libraries\n# catkin_add_gtest(${PROJECT_NAME}-test test/test_ros_python3_issues.cpp)\n# if(TARGET ${PROJECT_NAME}-test)\n#   target_link_libraries(${PROJECT_NAME}-test ${PROJECT_NAME})\n# endif()\n\n## Add folders to be run by python nosetests\n# catkin_add_nosetests(test)\n\n# if (CATKIN_ENABLE_TESTING)\n#   find_package(rostest REQUIRED)\n#   add_rostest(test/test_issue_rosunit.launch)\n# endif()\n"
  },
  {
    "path": "ros_dd_mpc/config/agisim_simulation_run.yaml",
    "content": "quad_name: 'kingfisher'\nworld_limits: None\nuse_ekf_synchronization: False\ncontrol_freq_factor: 5\nenvironment: 'agisim'"
  },
  {
    "path": "ros_dd_mpc/config/arena_limits.yaml",
    "content": "x_min: -6.0\nx_max: 12.0\ny_min: -7.0\ny_max: 7.0\nz_min: 1.0\nz_max: 4.0"
  },
  {
    "path": "ros_dd_mpc/config/arena_run.yaml",
    "content": "quad_name: 'kingfisher'\nworld_limits: arena_limits\nuse_ekf_synchronization: False\ncontrol_freq_factor: 5\nenvironment: 'arena'"
  },
  {
    "path": "ros_dd_mpc/config/basic.world",
    "content": "<?xml version=\"1.0\" ?>\n<sdf version=\"1.4\">\n  <world name=\"default\">\n    <include>\n      <uri>model://ground_plane</uri>\n    </include>\n    <include>\n      <uri>model://sun</uri>\n    </include>\n\n    <!-- Only one ROS interface plugin is required per world, as any other plugin can connect a Gazebo\n         topic to a ROS topic (or vise versa). -->\n    <plugin name=\"ros_interface_plugin\" filename=\"librotors_gazebo_ros_interface_plugin.so\"/>\n\n    <spherical_coordinates>\n      <surface_model>EARTH_WGS84</surface_model>\n      <latitude_deg>47.3667</latitude_deg>\n      <longitude_deg>8.5500</longitude_deg>\n      <elevation>500.0</elevation>\n      <heading_deg>0</heading_deg>\n    </spherical_coordinates>\n    <physics type='ode'>\n      <ode>\n        <solver>\n          <type>quick</type>\n          <iters>1000</iters>\n          <sor>1.3</sor>\n        </solver>\n        <constraints>\n          <cfm>0</cfm>\n          <erp>0.2</erp>\n          <contact_max_correcting_vel>100</contact_max_correcting_vel>\n          <contact_surface_layer>0.001</contact_surface_layer>\n        </constraints>\n      </ode>\n      <max_step_size>0.01</max_step_size>\n      <real_time_factor>0.5</real_time_factor>\n      <real_time_update_rate>50</real_time_update_rate>\n      <gravity>0 0 -9.8</gravity>\n    </physics>\n  </world>\n</sdf>\n"
  },
  {
    "path": "ros_dd_mpc/config/circle_and_lemniscate_options.yaml",
    "content": "# Parameters for the loop and lemniscate trajectories at successively increasing speed\nloop_z: 2.5              # Z position of loop plane [m]\nloop_r: 5.0            # Radius of loop [m]\nloop_v_max: 10.0       # Maximum speed achieved (approx.) [m/s]\nloop_lin_a: 0.25       # Linear acceleration and deceleration of trajectory [m/s^2]\nloop_clockwise: False  # Rotation direction of the quad\nloop_yawing: False     # Yaw quadrotor along the circle (only for loop)"
  },
  {
    "path": "ros_dd_mpc/config/configuration_parameters.py",
    "content": "\"\"\" Set of tunable parameters for the Simplified Simulator and model fitting.\n\nThis program is free software: you can redistribute it and/or modify it under\nthe terms of the GNU General Public License as published by the Free Software\nFoundation, either version 3 of the License, or (at your option) any later\nversion.\nThis program is distributed in the hope that it will be useful, but WITHOUT\nANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\nFOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.\nYou should have received a copy of the GNU General Public License along with\nthis program. If not, see <http://www.gnu.org/licenses/>.\n\"\"\"\n\nimport os\n\n\nclass DirectoryConfig:\n    \"\"\"\n    Class for storing directories within the package\n    \"\"\"\n\n    _dir_path = os.path.dirname(os.path.realpath(__file__))\n    SAVE_DIR = _dir_path + '/../results/model_fitting'\n    RESULTS_DIR = _dir_path + '/../results'\n    CONFIG_DIR = _dir_path + ''\n    DATA_DIR = _dir_path + '/../data'\n\n\nclass SimpleSimConfig:\n    \"\"\"\n    Class for storing the Simplified Simulator configurations.\n    \"\"\"\n\n    # Set to True to show a real-time Matplotlib animation of the experiments for the Simplified Simulator. Execution\n    # will be slower if the GUI is turned on. Note: setting to True may require some further library installation work.\n    custom_sim_gui = False\n\n    # Set to True to display a plot describing the trajectory tracking results after the execution.\n    result_plots = True\n\n    # Set to True to show the trajectory that will be executed before the execution time\n    pre_run_debug_plots = True\n\n    # Choice of disturbances modeled in our Simplified Simulator. 
For more details about the parameters used refer to\n    # the script: src/quad_mpc/quad_3d.py.\n    simulation_disturbances = {\n        \"noisy\": True,                       # Thrust and torque Gaussian noises\n        \"drag\": True,                        # 2nd order polynomial aerodynamic drag effect\n        \"payload\": False,                    # Payload force in the Z axis\n        \"motor_noise\": True                  # Asymmetric voltage noise in the motors\n    }\n\n\nclass ModelFitConfig:\n    \"\"\"\n    Class for storing flags for the model fitting scripts.\n    \"\"\"\n\n    # ## Dataset loading ## #\n    ds_name = \"simplified_sim_dataset\"\n    ds_metadata = {\n        \"noisy\": True,\n        \"drag\": True,\n        \"payload\": False,\n        \"motor_noise\": True\n    }\n\n    # ds_name = \"agisim_dataset\"\n    # ds_metadata = {\n    #     \"agisim\": \"default\",\n    # }\n\n    # ds_name = \"arena_dataset\"\n    # ds_metadata = {\n    #     \"arena\": \"default\",\n    # }\n\n    # ds_name = \"neurobem_dataset\"\n    # ds_metadata = {\n    #     \"arena\": \"default\",\n    # }\n\n    # ## Visualization ## #\n    # Training mode\n    visualize_training_result = True\n    visualize_data = False\n\n    # Visualization mode\n    grid_sampling_viz = True\n    x_viz = [7, 8, 9]\n    u_viz = []\n    y_viz = [7, 8, 9]\n\n    # ## Data post-processing ## #\n    histogram_bins = 40              # Cluster data using histogram binning\n    histogram_threshold = 0.001      # Remove bins where the total ratio of data is lower than this threshold\n    velocity_cap = 16                # Also remove dataset points if abs(velocity) > velocity_cap\n\n    # ############# Experimental ############# #\n\n    # ## Use fit model to generate synthetic data ## #\n    use_dense_model = False\n    dense_model_version = \"\"\n    dense_model_name = \"\"\n    dense_training_points = 200\n\n    # ## Clustering for multidimensional models ## #\n    clusters = 1\n    load_clusters = False\n\n\nclass GroundEffectMapConfig:\n    \"\"\"\n    Class for storing parameters for the ground effect map.\n    \"\"\"\n    resolution = 0.1\n    origin = (-4, 9)\n    horizon = ((-7, 7), (-7, 7))\n    box_min = (-4.25, 9.37)\n    box_max = (-2.76, 10.13)\n    box_height = 0.7\n"
  },
  {
    "path": "ros_dd_mpc/config/flyingroom_limits.yaml",
    "content": "x_min: -0.4\nx_max: 4.3\ny_min: -2.4\ny_max: 2.3\nz_min: 0.5\nz_max: 2.0"
  },
  {
    "path": "ros_dd_mpc/config/ground_effect_limits.yaml",
    "content": "x_min: -4.25\nx_max: -2.76\ny_min: 9.37\ny_max: 10.13\nz_min: 0.8 # 0.95\nz_max: 0.85"
  },
  {
    "path": "ros_dd_mpc/config/kingfisher.yaml",
    "content": "mass:               0.752                    # [kg]\ntbm_fr:             [ 0.075, -0.10, 0.0]          # [m]\ntbm_bl:             [-0.075,  0.10, 0.0]         # [m]\ntbm_br:             [-0.075, -0.10, 0.0]         # [m]\ntbm_fl:             [ 0.075,  0.10, 0.0]          # [m]\ninertia:            [0.0025, 0.0021, 0.0043]     # [kgm^2]\nmotor_omega_min:    150.0                     # [rad/s]\nmotor_omega_max:    2800.0                    # [rad/s]\nmotor_tau:          0.033                     # [s]\nomega_max:          [10.0, 10.0, 4.0]         # [rad/s]\ncomm_delay:         0.04                      # [s]\n\nthrust_map:         [1.562522e-6, 0.0,  0.0]\nkappa:              0.022                    # [Nm/N]\nthrust_min:         0.0                       # [N]\nthrust_max:         8.5                      # [N] per motor\nrotors_config:      \"cross\"\n\naero_coeff_1:       [0.0, 0.0, 0.0] # [0.26, 0.28, 0.42] # [N/(v/m)]\naero_coeff_3:       [0.0, 0.0, 0.0] # [N/(v/m)]\naero_coeff_h:        0.0            # 0.01 [N/(v^2/m^2)]\n"
  },
  {
    "path": "ros_dd_mpc/config/simulation_run.yaml",
    "content": "quad_name: 'kingfisher'\nworld_limits: None\nuse_ekf_synchronization: False\ncontrol_freq_factor: 5\nenvironment: 'gazebo'"
  },
  {
    "path": "ros_dd_mpc/launch/dd_mpc_wrapper.launch",
    "content": "<?xml version=\"1.0\"?>\n<launch>\n    <!-- true if running the nodes in the gazebo simulator environment. false if running on the real platform-->\n    <arg name=\"environment\" default=\"flying_room\"/>\n\n    <!-- Recording parameters -->\n    <arg name=\"recording\" default=\"false\"/>\n    <arg name=\"dataset_name\" default=\"gazebo_dataset\"/>\n    <arg name=\"overwrite\" default=\"true\"/>\n    <arg name=\"train_split\" default=\"true\"/>\n    <arg name=\"record_raw_optitrack\" default=\"false\"/>\n\n    <!-- Reference trajectory parameters -->\n    <arg name=\"n_seeds\" default=\"1\"/> <!-- How many random seeds to use to generate trajectories -->\n    <arg name=\"trajectory_speeds\" default=\"[4.0]\"/>\n    <arg name=\"flight_mode\" default=\"random\"/> <!-- one of: random, hover, loop -->\n    <arg name=\"plot\" default=\"false\"/>\n    <arg name=\"world_limits\" default=\"None\"/>\n\n    <!-- Stable version: Use time horizon of 1 second, 10 control nodes and control @ 50 Hz (control_freq_factor=5) -->\n    <arg name=\"t_horizon\" default=\"1\"/>\n    <arg name=\"n_nodes\" default=\"10\"/>\n\n    <!-- Model loading parameters -->\n    <arg name=\"model_version\" default=\"\"/>\n    <arg name=\"model_name\" default=\"\"/>\n    <arg name=\"model_type\" default=\"gp\"/>\n\n    <!-- Trajectory tracking experiment reset -->\n    <arg name=\"reset_experiment\" default=\"true\"/>\n\n    <group ns=\"gp_mpc\">\n\n        <group if=\"$(eval arg('environment')=='agisim')\">\n            <!-- Node that runs the MPC -->\n            <node pkg=\"ros_dd_mpc\" type=\"dd_mpc_node.py\" name=\"mpc_wrapper\" output=\"screen\">\n                <rosparam file=\"$(find ros_dd_mpc)/config/agisim_simulation_run.yaml\"/>\n\n                <param name=\"recording\" value=\"$(arg recording)\"/>\n                <param name=\"dataset_name\" value=\"$(arg dataset_name)\"/>\n                <param name=\"overwrite\" value=\"$(arg overwrite)\"/>\n                <param 
name=\"training_split\" value=\"$(arg train_split)\"/>\n                <param name=\"record_raw_optitrack\" value=\"$(arg record_raw_optitrack)\"/>\n\n                <param name=\"model_type\" value=\"$(arg model_type)\"/>\n                <param name=\"model_version\" value=\"$(arg model_version)\"/>\n                <param name=\"model_name\" value=\"$(arg model_name)\"/>\n\n                <param name=\"plot\" value=\"$(arg plot)\"/>\n                <param name=\"t_horizon\" value=\"$(arg t_horizon)\"/>\n                <param name=\"n_nodes\" value=\"$(arg n_nodes)\"/>\n\n                <param name=\"reset_experiment\" value=\"$(arg reset_experiment)\"/>\n            </node>\n\n            <!-- Random trajectory generator -->\n            <node pkg=\"ros_dd_mpc\" type=\"reference_publisher_node.py\" name=\"ref_gen\" output=\"screen\">\n                <rosparam file=\"$(find ros_dd_mpc)/config/agisim_simulation_run.yaml\"/>\n                <rosparam file=\"$(find ros_dd_mpc)/config/circle_and_lemniscate_options.yaml\"/>\n\n                <param name=\"n_seeds\" value=\"$(arg n_seeds)\"/>\n                <param name=\"v_list\" value=\"$(arg trajectory_speeds)\"/>\n                <param name=\"mode\" value=\"$(arg flight_mode)\"/>\n                <param name=\"world_limits\" value=\"$(arg world_limits)\"/>\n\n                <param name=\"t_horizon\" value=\"$(arg t_horizon)\"/>\n                <param name=\"n_nodes\" value=\"$(arg n_nodes)\"/>\n\n                <param name=\"plot\" value=\"$(arg plot)\"/>\n            </node>\n        </group>\n\n        <group if=\"$(eval arg('environment')=='gazebo')\">\n            <!-- Node that runs the MPC -->\n            <node pkg=\"ros_dd_mpc\" type=\"dd_mpc_node.py\" name=\"mpc_wrapper\" output=\"screen\">\n                <rosparam file=\"$(find ros_dd_mpc)/config/simulation_run.yaml\"/>\n\n                <param name=\"recording\" value=\"$(arg recording)\"/>\n                <param name=\"dataset_name\" 
value=\"$(arg dataset_name)\"/>\n                <param name=\"overwrite\" value=\"$(arg overwrite)\"/>\n                <param name=\"training_split\" value=\"$(arg train_split)\"/>\n                <param name=\"record_raw_optitrack\" value=\"$(arg record_raw_optitrack)\"/>\n\n                <param name=\"model_type\" value=\"$(arg model_type)\"/>\n                <param name=\"model_version\" value=\"$(arg model_version)\"/>\n                <param name=\"model_name\" value=\"$(arg model_name)\"/>\n\n                <param name=\"plot\" value=\"$(arg plot)\"/>\n                <param name=\"t_horizon\" value=\"$(arg t_horizon)\"/>\n                <param name=\"n_nodes\" value=\"$(arg n_nodes)\"/>\n\n                <param name=\"reset_experiment\" value=\"$(arg reset_experiment)\"/>\n            </node>\n\n            <!-- Random trajectory generator -->\n            <node pkg=\"ros_dd_mpc\" type=\"reference_publisher_node.py\" name=\"ref_gen\" output=\"screen\">\n                <rosparam file=\"$(find ros_dd_mpc)/config/simulation_run.yaml\"/>\n                <rosparam file=\"$(find ros_dd_mpc)/config/circle_and_lemniscate_options.yaml\"/>\n\n                <param name=\"n_seeds\" value=\"$(arg n_seeds)\"/>\n                <param name=\"v_list\" value=\"$(arg trajectory_speeds)\"/>\n                <param name=\"mode\" value=\"$(arg flight_mode)\"/>\n                <param name=\"world_limits\" value=\"$(arg world_limits)\"/>\n\n                <param name=\"t_horizon\" value=\"$(arg t_horizon)\"/>\n                <param name=\"n_nodes\" value=\"$(arg n_nodes)\"/>\n\n                <param name=\"plot\" value=\"$(arg plot)\"/>\n            </node>\n        </group>\n\n        <group if=\"$(eval arg('environment')=='flying_room')\">\n            <!-- Node that runs the MPC -->\n            <node pkg=\"ros_dd_mpc\" type=\"dd_mpc_node.py\" name=\"mpc_wrapper\" output=\"screen\">\n                <rosparam file=\"$(find 
ros_dd_mpc)/config/flying_room_run.yaml\"/>\n\n                <param name=\"recording\" value=\"$(arg recording)\"/>\n                <param name=\"dataset_name\" value=\"$(arg dataset_name)\"/>\n                <param name=\"overwrite\" value=\"$(arg overwrite)\"/>\n                <param name=\"training_split\" value=\"$(arg train_split)\"/>\n                <param name=\"record_raw_optitrack\" value=\"$(arg record_raw_optitrack)\"/>\n\n                <param name=\"model_type\" value=\"$(arg model_type)\"/>\n                <param name=\"model_version\" value=\"$(arg model_version)\"/>\n                <param name=\"model_name\" value=\"$(arg model_name)\"/>\n\n                <param name=\"plot\" value=\"$(arg plot)\"/>\n                <param name=\"t_horizon\" value=\"$(arg t_horizon)\"/>\n                <param name=\"n_nodes\" value=\"$(arg n_nodes)\"/>\n\n                <param name=\"reset_experiment\" value=\"$(arg reset_experiment)\"/>\n            </node>\n\n            <!-- Random trajectory generator -->\n            <node pkg=\"ros_dd_mpc\" type=\"reference_publisher_node.py\" name=\"ref_gen\" output=\"screen\">\n                <rosparam file=\"$(find ros_dd_mpc)/config/flying_room_run.yaml\"/>\n                <rosparam file=\"$(find ros_dd_mpc)/config/circle_and_lemniscate_options.yaml\"/>\n\n                <param name=\"n_seeds\" value=\"$(arg n_seeds)\"/>\n                <param name=\"v_list\" value=\"$(arg trajectory_speeds)\"/>\n                <param name=\"mode\" value=\"$(arg flight_mode)\"/>\n                <param name=\"world_limits\" value=\"$(arg world_limits)\"/>\n\n                <param name=\"t_horizon\" value=\"$(arg t_horizon)\"/>\n                <param name=\"n_nodes\" value=\"$(arg n_nodes)\"/>\n\n                <param name=\"plot\" value=\"$(arg plot)\"/>\n            </node>\n        </group>\n\n        <group if=\"$(eval arg('environment')=='arena')\">\n            <!-- Node that runs the MPC -->\n            <node 
pkg=\"ros_dd_mpc\" type=\"dd_mpc_node.py\" name=\"mpc_wrapper\" output=\"screen\">\n                <rosparam file=\"$(find ros_dd_mpc)/config/arena_run.yaml\"/>\n\n                <param name=\"recording\" value=\"$(arg recording)\"/>\n                <param name=\"dataset_name\" value=\"$(arg dataset_name)\"/>\n                <param name=\"overwrite\" value=\"$(arg overwrite)\"/>\n                <param name=\"training_split\" value=\"$(arg train_split)\"/>\n                <param name=\"record_raw_optitrack\" value=\"$(arg record_raw_optitrack)\"/>\n\n                <param name=\"model_type\" value=\"$(arg model_type)\"/>\n                <param name=\"model_version\" value=\"$(arg model_version)\"/>\n                <param name=\"model_name\" value=\"$(arg model_name)\"/>\n\n                <param name=\"plot\" value=\"$(arg plot)\"/>\n                <param name=\"t_horizon\" value=\"$(arg t_horizon)\"/>\n                <param name=\"n_nodes\" value=\"$(arg n_nodes)\"/>\n\n                <param name=\"reset_experiment\" value=\"$(arg reset_experiment)\"/>\n            </node>\n\n            <!-- Random trajectory generator -->\n            <node pkg=\"ros_dd_mpc\" type=\"reference_publisher_node.py\" name=\"ref_gen\" output=\"screen\">\n                <rosparam file=\"$(find ros_dd_mpc)/config/arena_run.yaml\"/>\n                <rosparam file=\"$(find ros_dd_mpc)/config/circle_and_lemniscate_options.yaml\"/>\n\n                <param name=\"n_seeds\" value=\"$(arg n_seeds)\"/>\n                <param name=\"v_list\" value=\"$(arg trajectory_speeds)\"/>\n                <param name=\"mode\" value=\"$(arg flight_mode)\"/>\n                <param name=\"world_limits\" value=\"$(arg world_limits)\"/>\n\n                <param name=\"t_horizon\" value=\"$(arg t_horizon)\"/>\n                <param name=\"n_nodes\" value=\"$(arg n_nodes)\"/>\n\n                <param name=\"plot\" value=\"$(arg plot)\"/>\n            </node>\n        </group>\n\n    
</group>\n</launch>"
  },
  {
    "path": "ros_dd_mpc/msg/ReferenceTrajectory.msg",
    "content": "int32 seq_len\nstring traj_name\nfloat64 v_input\nfloat64[] trajectory\nfloat64[] dt\nfloat64[] inputs\n"
  },
  {
    "path": "ros_dd_mpc/nodes/dd_mpc_node.py",
    "content": "#!/usr/bin/env python3.6\n\"\"\" ROS node for the data-augmented MPC, to use in the Gazebo simulator and real world experiments.\n\nThis program is free software: you can redistribute it and/or modify it under\nthe terms of the GNU General Public License as published by the Free Software\nFoundation, either version 3 of the License, or (at your option) any later\nversion.\nThis program is distributed in the hope that it will be useful, but WITHOUT\nANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\nFOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.\nYou should have received a copy of the GNU General Public License along with\nthis program. If not, see <http://www.gnu.org/licenses/>.\n\"\"\"\n\nimport json\nimport os\nimport threading\nimport time\n\nimport numpy as np\nimport pandas as pd\nimport rosbag\nimport rospy\nimport std_msgs.msg\nfrom agiros_msgs.msg import Command\nfrom gazebo_msgs.srv import GetPhysicsProperties\nfrom geometry_msgs.msg import PoseStamped\nfrom nav_msgs.msg import Odometry\nfrom ros_dd_mpc.msg import ReferenceTrajectory\nfrom std_msgs.msg import Bool, Empty\nfrom tqdm import tqdm\n\nfrom src.experiments.point_tracking_and_record import make_record_dict, get_record_file_and_dir, check_out_data\nfrom src.model_fitting.rdrv_fitting import load_rdrv\nfrom src.quad_mpc.create_ros_dd_mpc import ROSDDMPC\nfrom src.utils.utils import jsonify, interpol_mse, quaternion_state_mse, load_pickled_models, \\\n    separate_variables, get_model_dir_and_file\nfrom src.utils.visualization import trajectory_tracking_results, mse_tracking_experiment_plot, \\\n    load_past_experiments, get_experiment_files\n\n\ndef odometry_parse(odom_msg):\n    p = [odom_msg.pose.pose.position.x, odom_msg.pose.pose.position.y, odom_msg.pose.pose.position.z]\n    q = [odom_msg.pose.pose.orientation.w, odom_msg.pose.pose.orientation.x, odom_msg.pose.pose.orientation.y,\n         
odom_msg.pose.pose.orientation.z]\n    v = [odom_msg.twist.twist.linear.x, odom_msg.twist.twist.linear.y, odom_msg.twist.twist.linear.z]\n    w = [odom_msg.twist.twist.angular.x, odom_msg.twist.twist.angular.y, odom_msg.twist.twist.angular.z]\n\n    return p, q, v, w\n\ndef state_parse(state_msg):\n    p = [state_msg.pose.position.x, state_msg.pose.position.y, state_msg.pose.position.z]\n    q = [state_msg.pose.orientation.w, state_msg.pose.orientation.x, state_msg.pose.orientation.y,\n         state_msg.pose.orientation.z]\n    v = [state_msg.velocity.linear.x, state_msg.velocity.linear.y, state_msg.velocity.linear.z]\n    w = [state_msg.velocity.angular.x, state_msg.velocity.angular.y, state_msg.velocity.angular.z]\n    omega = state_msg.motor_speeds\n    return p, q, v, w, omega\n\n\ndef make_raw_optitrack_dict():\n    rec_dict_raw = make_record_dict(state_dim=7)\n    # Remove unnecessary entries\n    keys = list(rec_dict_raw.keys())\n    for key in keys:\n        if key not in [\"state_in\", \"timestamp\"]:\n            rec_dict_raw.pop(key)\n    return rec_dict_raw\n\n\ndef make_state_record_dict(state_dim):\n    blank_state_recording_dict = {\n            \"state_in\": np.zeros((0, state_dim)),\n            \"state_ref\": np.zeros((0, state_dim)),\n            \"input_in\": np.zeros((0, 4)),\n            \"timestamp\": np.zeros((0, 1)),\n    }\n    return blank_state_recording_dict\n\n\ndef odometry_skipped_warning(last_seq, current_seq, stage):\n    skip_msg = \"Odometry skipped at %s step. 
Last: %d, current: %d\" % (stage, last_seq, current_seq)\n    rospy.logwarn(skip_msg)\n\n\nclass DDMPCWrapper:\n    def __init__(self, quad_name, environment=\"agisim\", recording_options=None, load_options=None,\n                 rdrv=None, plot=False, reset_experiment=False):\n\n        if recording_options is None:\n            recording_options = {\"recording\": False}\n\n        # If on a simulation environment, figure out if physics are slowed down\n        if environment == \"gazebo\":\n            try:\n                get_gazebo_physics = rospy.ServiceProxy('/gazebo/get_physics_properties', GetPhysicsProperties)\n                resp = get_gazebo_physics()\n                physics_speed = resp.max_update_rate * resp.time_step\n                rospy.loginfo(\"Physics running at %.2f normal speed\" % physics_speed)\n            except rospy.ServiceException as e:\n                print(\"Service call failed: %s\" % e)\n                physics_speed = 1\n        else:\n            physics_speed = 1\n        self.physics_speed = physics_speed\n\n        self.environment = environment\n        self.plot = plot\n        self.recording_options = recording_options\n\n        # Control at 50 hz. 
Use time horizon=1 and 10 nodes\n        self.n_mpc_nodes = rospy.get_param('~n_nodes', default=10)\n        self.t_horizon = rospy.get_param('~t_horizon', default=1.0)\n        self.control_freq_factor = rospy.get_param('~control_freq_factor', default=5 if environment == \"gazebo\" else 6)\n        self.opt_dt = self.t_horizon / (self.n_mpc_nodes * self.control_freq_factor)\n\n        # Load trained model\n        mlp_conf = None\n        if load_options is not None:\n            if load_options['model_type'] == 'gp':\n                rospy.loginfo(\"Attempting to load GP model from:\\n   git: {}\\n   name: {}\\n   meta: {}\".format(\n                    load_options[\"git\"], load_options[\"model_name\"], load_options[\"params\"]))\n                pre_trained_models = load_pickled_models(model_options=load_options)\n            else:\n                import torch\n                import ml_casadi.torch as mc\n                from src.model_fitting.mlp_common import NormalizedMLP, QuadResidualModel\n                directory, file_name = get_model_dir_and_file(load_options)\n                saved_dict = torch.load(os.path.join(directory, f\"{file_name}.pt\"), map_location=\"cpu\")\n                model_type = load_options['model_type']\n                if '_qres' in model_type:\n                    mlp_model = QuadResidualModel(saved_dict['hidden_size'], saved_dict['hidden_layers'])\n                else:\n                    mlp_model = mc.nn.MultiLayerPerceptron(saved_dict['input_size'], saved_dict['hidden_size'],\n                                                   saved_dict['output_size'], saved_dict['hidden_layers'], 'Tanh')\n                model = NormalizedMLP(mlp_model, torch.tensor(np.zeros((saved_dict['input_size'],))).float(),\n                                      torch.tensor(np.zeros((saved_dict['input_size'],))).float(),\n                                      torch.tensor(np.zeros((saved_dict['output_size'],))).float(),\n                          
            torch.tensor(np.zeros((saved_dict['output_size'],))).float())\n                model.load_state_dict(saved_dict['state_dict'])\n                model.eval()\n                if 'gpu' in model_type:\n                    model = model.to('cuda:0')\n                pre_trained_models = model\n\n                mlp_conf = {'approximated': False, 'v_inp': True, 'u_inp': False, 'ground_map_input': False,\n                            'torque_output': False}\n\n                print(model_type)\n                if model_type.endswith('approx') or model_type.endswith('approx1'):\n                    mlp_conf['approximated'] = True\n                    mlp_conf['approx_order'] = 1\n                if model_type.endswith('approx2'):\n                    mlp_conf['approximated'] = True\n                    mlp_conf['approx_order'] = 2\n                if '_u' in model_type:\n                    mlp_conf['u_inp'] = True\n                if '_ge' in model_type:\n                    mlp_conf['ground_map_input'] = True\n                if '_T' in model_type:\n                    mlp_conf['torque_output'] = True\n                if '_qres' in model_type:\n                    mlp_conf['u_inp'] = True\n                    mlp_conf['torque_output'] = True\n\n            if pre_trained_models is None:\n                rospy.logwarn(\"Model parameters specified did not match with any pre-trained model\")\n        else:\n            pre_trained_models = None\n        self.pre_trained_models = pre_trained_models\n        self.git_v = load_options[\"git\"] if load_options is not None else None\n        if self.pre_trained_models is not None:\n            rospy.loginfo(\"Successfully loaded model\")\n            self.model_name = load_options[\"model_name\"]\n        elif rdrv is not None:\n            self.model_name = \"rdrv\"\n        else:\n            self.model_name = \"Nominal\"\n\n        # Initialize MPC for point tracking\n        self.dd_mpc = ROSDDMPC(self.t_horizon, self.n_mpc_nodes, self.opt_dt, 
quad_name=quad_name,\n                               model_conf=mlp_conf, point_reference=False, models=pre_trained_models,\n                               rdrv=rdrv)\n\n        # Last state obtained from odometry\n        self.x = None\n\n        # Elapsed time between two recordings\n        self.last_update_time = time.time()\n\n        # Last references. Use hovering activation as input reference\n        self.last_x_ref = None\n        self.last_u_ref = None\n\n        # Reference trajectory variables\n        self.x_ref = None\n        self.t_ref = None\n        self.u_ref = None\n        self.current_idx = 0\n        self.skipped_idx = []\n        self.quad_trajectory = None\n        self.quad_controls = None\n        self.w_control = None\n\n        # Provisional reference for \"waiting_for_reference\" hovering mode\n        self.x_ref_prov = None\n\n        # To measure optimization elapsed time\n        self.optimization_dt = 0\n        self.optimization_steps = 0\n\n        # Thread for MPC optimization\n        self.mpc_thread = threading.Thread()\n\n        # Trajectory tracking experiment. 
Dims: seed x av_v x n_samples\n        if reset_experiment:\n            self.metadata_dict = {}\n        else:\n            self.metadata_dict, _, _, _ = load_past_experiments()\n        self.mse_exp = np.zeros((0, 0, 0, 1))\n        self.t_opt = np.zeros((0, 0, 0))\n        self.mse_exp_v_max = np.zeros((0, 0))\n        self.ref_traj_name = \"\"\n        self.ref_v = 0\n        self.run_traj_counter = 0\n\n        # Keep track of status of MPC object\n        self.odom_available = False\n\n        # Binary variable to run MPC only once every other odometry callback\n        self.optimize_next = True\n\n        # Binary variable to completely skip an odometry if in flying arena\n        self.skip_next = False\n\n        # Remember the sequence number of the last odometry message received.\n        self.last_odom_seq_number = 0\n\n        # Measure if trajectory starting point is reached\n        self.x_initial_reached = False\n\n        # Variables for recording mode\n        self.recording_warmup = True\n        self.x_pred = None\n        self.w_opt = None\n\n        # Get recording file and directory\n        blank_recording_dict = make_record_dict(state_dim=13)\n        if recording_options[\"recording\"]:\n            record_raw_optitrack = recording_options[\"record_raw_optitrack\"]\n            overwrite = recording_options[\"overwrite\"]\n            metadata = {self.environment: \"default\"}\n\n            rec_dict, rec_file = get_record_file_and_dir(\n                blank_recording_dict, recording_options, simulation_setup=metadata, overwrite=overwrite)\n\n            state_rec_dict = make_state_record_dict(state_dim=13)\n\n            # If flying with the optitrack system, also record raw optitrack estimates\n            if (self.environment == \"flying_room\" or self.environment == 'arena') and record_raw_optitrack:\n                rec_dict_raw = make_raw_optitrack_dict()\n                metadata = {self.environment: \"optitrack_raw\"}\n
                rec_dict_raw, rec_file_raw = get_record_file_and_dir(\n                    rec_dict_raw, recording_options, simulation_setup=metadata, overwrite=overwrite)\n            else:\n                rec_dict_raw = rec_file_raw = None\n\n            if recording_options[\"record_raw_state_control\"]:\n                record_raw_state_control = True\n                raw_state_control_bag = rosbag.Bag(os.path.splitext(rec_file)[0] + '.bag', 'w')\n            else:\n                record_raw_state_control = False\n                raw_state_control_bag = None\n\n        else:\n            state_rec_dict = None\n            record_raw_optitrack = False\n            rec_dict = rec_file = None\n            rec_dict_raw = rec_file_raw = None\n            record_raw_state_control = False\n            raw_state_control_bag = None\n\n        self.state_rec_dict = state_rec_dict\n        self.rec_dict = rec_dict\n        self.rec_file = rec_file\n        self.rec_dict_raw = rec_dict_raw\n        self.rec_file_raw = rec_file_raw\n        self.record_raw_state_control = record_raw_state_control\n        self.raw_state_control_bag = raw_state_control_bag\n\n        self.landing = False\n        self.override_land = False\n        self.ground_level = False\n        self.controller_off = False\n\n        # Setup node publishers and subscribers. 
The odometry (sub) and control (pub) topics will vary depending on\n        # which environment is being used\n        odom_topic = \"/\" + quad_name + \"/agiros_pilot/odometry\"\n        raw_topic = None\n        if self.environment == \"arena\":\n            raw_topic = \"/vicon/\" + quad_name\n\n        land_topic = \"/\" + quad_name + \"/agiros_pilot/land\"\n        control_topic = \"/\" + quad_name + \"/agiros_pilot/feedthrough_command\"\n        off_topic = \"/\" + quad_name + \"/agiros_pilot/off\"\n\n        reference_topic = \"reference\"\n        status_topic = \"busy\"\n\n        # Publishers\n        self.control_pub = rospy.Publisher(control_topic, Command, queue_size=1, tcp_nodelay=True)\n        self.status_pub = rospy.Publisher(status_topic, Bool, queue_size=1)\n        self.off_pub = rospy.Publisher(off_topic, Empty, queue_size=1)\n\n        # Subscribers\n        self.land_sub = rospy.Subscriber(land_topic, Empty, self.land_callback)\n        self.ref_sub = rospy.Subscriber(reference_topic, ReferenceTrajectory, self.reference_callback)\n        self.odom_sub = rospy.Subscriber(odom_topic, Odometry, self.odometry_callback, queue_size=1, tcp_nodelay=True)\n        if raw_topic is not None and record_raw_optitrack:\n            self.raw_sub = rospy.Subscriber(raw_topic, PoseStamped, self.raw_odometry_callback)\n\n        rate = rospy.Rate(1)\n        while not rospy.is_shutdown():\n            # Publish if MPC is busy with a current trajectory\n            msg = Bool()\n            msg.data = not (self.x_ref is None and self.odom_available)\n            self.status_pub.publish(msg)\n            rate.sleep()\n\n    def land_callback(self, _):\n        \"\"\"\n        Trigger landing sequence.\n        :param _: empty message\n        \"\"\"\n\n        rospy.loginfo(\"Controller disabled. 
Landing with RPG MPC\")\n        self.override_land = True\n\n    def rest_state(self):\n        \"\"\"\n        Set quad reference to a hover state at the current position\n        \"\"\"\n        self.last_x_ref = [[self.x[0], self.x[1], self.x[2]], [1, 0, 0, 0], [0, 0, 0], [0, 0, 0]]\n        self.last_u_ref = self.dd_mpc.quad.g[-1] * self.dd_mpc.quad.mass / (self.dd_mpc.quad.max_thrust * 4)\n        self.last_u_ref = self.last_u_ref[0] * np.array([1, 1, 1, 1])\n\n    def create_command_msg(self, w_opt, x_opt):\n        \"\"\"\n        Creates Command Msg from MPC output.\n        \"\"\"\n        next_control = Command()\n        next_control.header = std_msgs.msg.Header()\n        next_control.header.stamp = rospy.Time.now()\n        next_control.is_single_rotor_thrust = self.environment == 'agisim'\n        next_control.collective_thrust = np.sum(w_opt[:4]) * self.dd_mpc.quad.max_thrust / self.dd_mpc.quad.mass\n        next_control.bodyrates.x = x_opt[1, -3]\n        next_control.bodyrates.y = x_opt[1, -2]\n        next_control.bodyrates.z = x_opt[1, -1]\n        next_control.thrusts = w_opt[:4] * self.dd_mpc.quad.max_thrust\n        return next_control\n\n    def run_mpc(self, odom, recording=True):\n        \"\"\"\n        :param odom: message from subscriber.\n        :type odom: Odometry\n        :param recording: If False, some messages were skipped between this and last optimization. 
Don't record any data\n        during this optimization if in recording mode.\n        \"\"\"\n\n        if not self.odom_available:\n            return\n\n        # Measure the time elapsed since the initial state was checked in\n        dt = odom.header.stamp.to_time() - self.last_update_time\n\n        model_data, x_guess, u_guess = self.set_reference()\n\n        # Run MPC and publish control\n        try:\n            tic = time.time()\n            w_opt, x_opt = self.dd_mpc.optimize(model_data)\n            next_control = self.create_command_msg(w_opt, x_opt)\n            self.optimization_dt += time.time() - tic\n            self.optimization_steps += 1\n            if time.time() - tic > 0.01:\n                print(\"MPC thread. Seq: %d. Topt: %.4f ms\" % (odom.header.seq, (time.time() - tic) * 1000))\n            self.control_pub.publish(next_control)\n\n            if self.x_initial_reached and self.current_idx < self.w_control.shape[0]:\n                self.w_control[self.current_idx, 0] = next_control.bodyrates.x\n                self.w_control[self.current_idx, 1] = next_control.bodyrates.y\n                self.w_control[self.current_idx, 2] = next_control.bodyrates.z\n\n        except KeyError:\n            self.recording_warmup = True\n            # Should not happen anymore.\n            rospy.logwarn(\"Tried to run an MPC optimization but MPC is not ready yet.\")\n            return\n\n        if w_opt is not None:\n            # Check out final states. 
self.recording_warmup can only be true in recording mode.\n            if not self.recording_warmup and recording and self.x_initial_reached:\n                x_out = np.array(self.x)[np.newaxis, :]\n                self.rec_dict = check_out_data(self.rec_dict, x_out, None, self.w_opt, dt)\n            if self.record_raw_state_control:\n                self.raw_state_control_bag.write('control', next_control)\n\n            self.w_opt = w_opt\n            if self.x_initial_reached and self.current_idx < self.quad_controls.shape[0]:\n                self.quad_controls[self.current_idx, :] = np.expand_dims(self.w_opt[:4], axis=0)\n\n    def check_out_initial_state(self, odom):\n        \"\"\"\n        Add the initial state to the recording dictionary and start counting until next optimization\n        :param odom: message from subscriber.\n        :type odom: Odometry\n        \"\"\"\n\n        if self.w_opt is not None:\n            self.last_update_time = odom.header.stamp.to_time()\n            self.rec_dict[\"state_in\"] = np.append(self.rec_dict[\"state_in\"], np.array(self.x)[np.newaxis, :], 0)\n            self.rec_dict[\"timestamp\"] = np.append(self.rec_dict[\"timestamp\"], odom.header.stamp.to_time())\n            if self.current_idx < self.x_ref.shape[0]:\n                self.rec_dict[\"state_ref\"] = np.append(self.rec_dict[\"state_ref\"], self.x_ref[np.newaxis, self.current_idx, :], 0)\n            self.recording_warmup = False\n\n    def reference_callback(self, msg):\n        \"\"\"\n        Callback for receiving a reference trajectory\n        :param msg: message from subscriber\n        :type msg: ReferenceTrajectory\n        \"\"\"\n\n        seq_len = msg.seq_len\n\n        if seq_len == 0:\n            # Hover-in-place mode\n            self.x_ref = self.x[:7]\n            self.u_ref = None\n            self.t_ref = None\n\n            off_msg = Empty()\n            self.off_pub.publish(off_msg)\n            self.controller_off = True\n\n        
    # If this is the end of a reference tracking experiment, generate the results plot\n            if self.mse_exp.shape[0] != 0 and self.run_traj_counter > 0:\n                self.plot_tracking_mse_experiment()\n\n            self.landing = False\n            rospy.loginfo(\"No more references will be received\")\n            if self.raw_state_control_bag is not None:\n                self.record_raw_state_control = False\n                self.raw_state_control_bag.close()\n            return\n\n        # Save reference name\n        self.ref_traj_name = msg.traj_name\n        self.ref_v = msg.v_input\n\n        # Save reference trajectory, relative times and inputs\n        self.x_ref = np.array(msg.trajectory).reshape(seq_len, -1)\n        self.t_ref = np.array(msg.dt)\n        self.u_ref = np.array(msg.inputs).reshape(seq_len, -1)\n        self.quad_trajectory = np.zeros((len(self.t_ref), len(self.x)))\n        self.quad_controls = np.zeros((len(self.t_ref), 4))\n\n        self.w_control = np.zeros((len(self.t_ref), 3))\n\n        rospy.loginfo(\"New trajectory received. 
Time duration: %.2f s\" % self.t_ref[-1])\n\n    def odometry_callback(self, msg):\n        \"\"\"\n        Callback function for Odometry subscriber\n        :param msg: message from subscriber.\n        :type msg: Odometry\n        \"\"\"\n        if self.controller_off:\n            return\n        p, q, v, w = odometry_parse(msg)\n\n        self.x = p + q + v + w\n\n        try:\n            # Update the state estimate of the quad\n            self.dd_mpc.set_state(self.x)\n\n        except AttributeError:\n            # The DD MPC object instantiation is still not finished\n            return\n\n        if self.override_land:\n            return\n\n        # If the above try passed then MPC is ready\n        self.odom_available = True\n\n        if self.record_raw_state_control:\n            self.raw_state_control_bag.write('state', msg)\n\n        # We only optimize once every two odometry messages\n        if not self.optimize_next:\n            self.mpc_thread.join()\n\n            # If currently on trajectory tracking, pay close attention to any skipped messages.\n            if self.x_initial_reached:\n\n                # Count how many messages were skipped (ideally 0)\n                skipped_messages = int(msg.header.seq - self.last_odom_seq_number - 1)\n                if skipped_messages > 0:\n                    warn_msg = \"Recording time skipped messages: %d\" % skipped_messages\n                    rospy.logwarn(warn_msg)\n\n                # Adjust current index in trajectory\n                for i in range(divmod(skipped_messages, 2)[0]):\n                    if self.current_idx + i < self.x_ref.shape[0]:\n                        self.skipped_idx.append(self.current_idx + i)\n                self.current_idx += divmod(skipped_messages, 2)[0]\n                # If odd number of skipped messages, do optimization\n                if skipped_messages > 0 and skipped_messages % 2 == 1:\n\n                    if 
self.recording_options[\"recording\"]:\n                        self.check_out_initial_state(msg)\n\n                    # Run MPC now\n                    self.run_mpc(msg)\n                    self.last_odom_seq_number = msg.header.seq\n                    self.optimize_next = False\n                    return\n\n            self.optimize_next = True\n            if self.recording_options[\"recording\"] and self.x_initial_reached:\n                self.check_out_initial_state(msg)\n            return\n\n        # Run MPC\n        if msg.header.seq > self.last_odom_seq_number + 2 and self.last_odom_seq_number > 0 and self.x_initial_reached:\n            # If one message was skipped at this point, then the reference is already late. Compensate by\n            # optimizing twice in a row and hope to do it fast...\n            if self.recording_options[\"recording\"] and self.x_initial_reached:\n                self.check_out_initial_state(msg)\n            odometry_skipped_warning(self.last_odom_seq_number, msg.header.seq, \"optimization\")\n            self.run_mpc(msg)\n            self.optimize_next = True\n            self.last_odom_seq_number = msg.header.seq\n            return\n\n        def _thread_func():\n            self.run_mpc(msg)\n        self.mpc_thread = threading.Thread(target=_thread_func, args=(), daemon=True)\n        self.mpc_thread.start()\n\n        self.last_odom_seq_number = msg.header.seq\n        self.optimize_next = False\n\n    def raw_odometry_callback(self, msg):\n        \"\"\"\n        Callback function for the raw Optitrack subscriber. 
Adds the data to the raw data dictionary.\n        :param msg: Raw data from Optitrack estimator\n        :type msg: PoseStamped\n        \"\"\"\n\n        if not self.recording_options[\"recording\"] or not self.x_initial_reached:\n            return\n\n        x = np.array([msg.pose.position.x, msg.pose.position.y, msg.pose.position.z,\n                      msg.pose.orientation.w, msg.pose.orientation.x, msg.pose.orientation.y, msg.pose.orientation.z])\n\n        self.rec_dict_raw[\"state_in\"] = np.append(self.rec_dict_raw[\"state_in\"], x[np.newaxis, :], 0)\n        self.rec_dict_raw[\"timestamp\"] = np.append(self.rec_dict_raw[\"timestamp\"], msg.header.stamp.to_time())\n\n    def hover_here(self, x):\n        self.rest_state()\n        x_ref = [x[:3], x[3:7], [0, 0, 0], [0, 0, 0]]\n        u_ref = self.last_u_ref\n        x_guess = np.tile(np.concatenate(x_ref)[np.newaxis, :], (self.n_mpc_nodes, 1))\n        u_guess = np.tile(self.last_u_ref[np.newaxis, :], (self.n_mpc_nodes, 1))\n        return self.dd_mpc.set_reference(x_ref, u_ref), x_guess, u_guess\n\n    def set_reference(self):\n\n        if self.environment == \"gazebo\" or self.environment == \"agisim\":\n            th = 0.1\n        else:\n            th = 0.2\n        mask = [1] * 9 + [0] * 3\n\n        x_ref = self.last_x_ref\n        u_ref = self.last_u_ref\n\n        x_guess = None\n        u_guess = None\n\n        if not self.odom_available:\n            return\n\n        # Check if landing mode\n        if self.landing:\n            dz = np.sign(0.3 - self.x[2])\n            dz = dz * 0.1 if self.environment != \"agisim\" else dz * 0.5\n            x_ref[0][2] = min(0.1, self.x[2] + dz) if dz > 0 else max(0.1, self.x[2] + dz)\n\n            # Check if z position is close to target.\n            if abs(self.x[2] - 0.2) < 0.05:\n                executed_x_ref = self.x_ref\n                executed_u_ref = self.u_ref\n                executed_t_ref = self.t_ref\n\n                self.x_ref = 
None\n                self.u_ref = None\n                self.t_ref = None\n\n                self.x_initial_reached = False\n\n                if self.recording_options[\"recording\"]:\n                    self.save_recording_data()\n\n                # Calculate MSE of position tracking and maximum axial velocity achieved\n                rmse = interpol_mse(np.delete(executed_t_ref, self.skipped_idx, axis=0),\n                                    np.delete(executed_x_ref[:, :3], self.skipped_idx, axis=0),\n                                    np.delete(executed_t_ref, self.skipped_idx, axis=0),\n                                    np.delete(self.quad_trajectory[:, :3], self.skipped_idx, axis=0))\n                self.optimization_dt /= self.optimization_steps\n\n                if self.ref_traj_name in self.metadata_dict.keys():\n                    if self.model_name in self.metadata_dict[self.ref_traj_name].keys():\n                        self.metadata_dict[self.ref_traj_name][self.model_name][self.ref_v] = [rmse,\n                                                                                               self.optimization_dt]\n                    else:\n                        self.metadata_dict[self.ref_traj_name][self.model_name] = {\n                            self.ref_v: [rmse, self.optimization_dt]}\n                else:\n                    self.metadata_dict[self.ref_traj_name] = {\n                        self.model_name: {self.ref_v: [rmse, self.optimization_dt]}}\n\n                n_trajectories = len(self.metadata_dict.keys())\n                n_models = len(self.metadata_dict[self.ref_traj_name].keys())\n                n_vel = len(self.metadata_dict[self.ref_traj_name][self.model_name].keys())\n\n                # Figure out dimensions of data so far\n                self.mse_exp = np.zeros((n_trajectories, n_vel, n_models, 1))\n                self.t_opt = np.zeros((n_trajectories, n_vel, n_models))\n                self.mse_exp_v_max = 
np.zeros((n_trajectories, n_vel))\n\n                # Add data to array\n                # Dimensions of mse_exp: n_trajectories x n_average_speeds x n_models x n_sim_options\n                for traj_id, traj_type in enumerate(self.metadata_dict.keys()):\n                    for model_id, model_type in enumerate(self.metadata_dict[traj_type].keys()):\n                        for vel_id, vel in enumerate(self.metadata_dict[traj_type][model_type].keys()):\n                            self.mse_exp[traj_id, vel_id, model_id, 0] = self.metadata_dict[traj_type][model_type][vel][0]\n                            self.t_opt[traj_id, vel_id, model_id] = self.metadata_dict[traj_type][model_type][vel][1]\n                            self.mse_exp_v_max[traj_id, vel_id] = vel\n\n                v_max = np.max(self.quad_trajectory[:, 7:10])\n                rospy.loginfo(\"Tracking complete. Total RMSE: %.5f m. Max axial vel: %.3f. \"\n                              \"Mean optimization time: %.3f ms\" % (rmse, v_max, self.optimization_dt * 1000))\n\n                self.current_idx = 0\n                if self.plot:\n                    with_gp = ' + GP ' if self.pre_trained_models is not None else ' - GP '\n                    tit = r'$v_{max}$=%.2f m/s | RMSE: %.4f | %s ' % (v_max, float(rmse), with_gp)\n                    trajectory_tracking_results(np.delete(executed_t_ref, self.skipped_idx, axis=0),\n                                                np.delete(executed_x_ref, self.skipped_idx, axis=0),\n                                                np.delete(self.quad_trajectory, self.skipped_idx, axis=0),\n                                                np.delete(executed_u_ref, self.skipped_idx, axis=0),\n                                                np.delete(self.quad_controls, self.skipped_idx, axis=0),\n                                                w_control=np.delete(self.w_control, self.skipped_idx, axis=0), title=tit)\n                    rospy.loginfo('Saved Plot!')\n\n                # 
Stop landing. Quad is close to ground level\n                self.landing = False\n                self.ground_level = True\n\n            return self.dd_mpc.set_reference(x_ref, u_ref), x_guess, u_guess\n\n        # Check if a reference trajectory is set up. If not, hover at the current position\n        if self.x_ref is None:\n\n            self.ground_level = False\n            # We are waiting for a new reference. Set provisional hover mode at current position\n            if self.x_ref_prov is None:\n                rospy.loginfo(\"Entering provisional hovering mode while no reference is available, at: \")\n                self.x_ref_prov = self.x\n                rospy.loginfo(self.x_ref_prov)\n\n            # Provisional hovering mode\n            return self.hover_here(self.x_ref_prov)\n\n        if self.x_ref_prov is not None:\n            self.x_ref_prov = None\n            rospy.loginfo(\"Abandoning provisional hovering mode.\")\n\n        # Check if reference is hovering mode\n        if isinstance(self.x_ref, list):\n            return self.hover_here(self.x_ref)\n\n        # Trajectory tracking mode. 
Check if target reached\n        if quaternion_state_mse(np.array(self.x), self.x_ref[0, :], mask) < th and not self.x_initial_reached:\n            if self.record_raw_state_control:\n                self.raw_state_control_bag.write('recording_ctrl', std_msgs.msg.Bool(True))\n            # Initial position of trajectory has been reached\n            self.x_initial_reached = True\n            self.odom_available = False\n            self.optimization_dt = 0\n            rospy.loginfo(\"Reached initial position of trajectory.\")\n            model_data = self.dd_mpc.set_reference(separate_variables(self.x_ref[:1, :]), self.u_ref[:1, :])\n            return model_data, x_guess, u_guess\n\n        # Raise the drone towards the initial position of the trajectory\n        if not self.x_initial_reached:\n            dx = 0.3 * np.sign(self.x_ref[0, 0] - self.x[0])\n            dy = 0.3 * np.sign(self.x_ref[0, 1] - self.x[1])\n            dz = 0.3 * np.sign(self.x_ref[0, 2] - self.x[2])\n            x_ref[0][0] = min(self.x_ref[0, 0], self.x[0] + dx) if dx > 0 else max(self.x_ref[0, 0], self.x[0] + dx)\n            x_ref[0][1] = min(self.x_ref[0, 1], self.x[1] + dy) if dy > 0 else max(self.x_ref[0, 1], self.x[1] + dy)\n            x_ref[0][2] = min(self.x_ref[0, 2], self.x[2] + dz) if dz > 0 else max(self.x_ref[0, 2], self.x[2] + dz)\n\n        elif self.current_idx < self.x_ref.shape[0]:\n\n            self.quad_trajectory[self.current_idx, :] = np.expand_dims(self.x, axis=0)\n\n            # Trajectory tracking\n            ref_traj = self.x_ref[self.current_idx:self.current_idx + self.n_mpc_nodes * self.control_freq_factor, :]\n            ref_u = self.u_ref[self.current_idx:self.current_idx + self.n_mpc_nodes * self.control_freq_factor, :]\n\n            # Indices for down-sampling the reference to number of MPC nodes\n            downsample_ref_ind = np.arange(0, min(self.control_freq_factor * self.n_mpc_nodes, ref_traj.shape[0]),\n                                     
      self.control_freq_factor, dtype=int)\n\n            # Sparser references (same dt as node separation)\n            x_ref = ref_traj[downsample_ref_ind, :]\n            u_ref = ref_u[downsample_ref_ind, :]\n\n            # Initial guesses\n            u_guess = u_ref\n            x_guess = x_ref\n            while u_guess.shape[0] < self.n_mpc_nodes:\n                x_guess = np.concatenate((x_guess, x_guess[-1:, :]), axis=0)\n                u_guess = np.concatenate((u_guess, u_guess[-1:, :]), axis=0)\n\n            x_ref = separate_variables(x_ref)\n\n            self.current_idx += 1\n\n        # End of reference reached\n        elif self.current_idx == self.x_ref.shape[0]:\n            rospy.loginfo(\"Finished trajectory - Landing.\")\n            # Add one to the completed trajectory counter\n            self.run_traj_counter += 1\n\n            # Lower drone to a safe height\n            self.landing = True\n            self.rest_state()\n            x_ref = self.last_x_ref\n            u_ref = self.last_u_ref\n\n            # Stop recording\n            self.x_initial_reached = False\n            self.recording_warmup = True\n            if self.record_raw_state_control:\n                self.raw_state_control_bag.write('recording_ctrl', std_msgs.msg.Bool(False))\n\n        self.last_x_ref = x_ref\n        self.last_u_ref = u_ref\n        return self.dd_mpc.set_reference(x_ref, u_ref), x_guess, u_guess\n\n    def plot_tracking_mse_experiment(self):\n\n        metadata_file, _, _, _ = get_experiment_files()\n\n        # Save data for reload\n        with open(metadata_file, 'w') as json_file:\n            json.dump(self.metadata_dict, json_file, indent=4)\n\n        # Sort seeds dictionary by value\n        traj_type_labels = [k for k in self.metadata_dict.keys()]\n        model_type_labels = [k for k in self.metadata_dict[traj_type_labels[0]].keys()]\n\n        mse_tracking_experiment_plot(v_max=self.mse_exp_v_max, mse=self.mse_exp, 
traj_type_vec=traj_type_labels,\n                                     train_samples_vec=model_type_labels, legends=model_type_labels,\n                                     y_labels=[\"RotorS\"], t_opt=self.t_opt)\n\n    def save_recording_data(self):\n\n        # Remove exceeding data entry if needed\n        if len(self.rec_dict['state_in']) > len(self.rec_dict['input_in']):\n            self.rec_dict['state_in'] = self.rec_dict['state_in'][:-1]\n            self.rec_dict['timestamp'] = self.rec_dict['timestamp'][:-1]\n\n        # Compute predictions offline to avoid extra overhead while in trajectory tracking control\n        rospy.loginfo(\"Filling in dataset and saving...\")\n\n        for i in tqdm(range(len(self.rec_dict['input_in']))):\n            x_0 = self.rec_dict['state_in'][i]\n            x_f = self.rec_dict['state_out'][i]\n            u = self.rec_dict['input_in'][i]\n            dt = self.rec_dict['dt'][i]\n            x_pred, _ = self.dd_mpc.quad_mpc.forward_prop(x_0, u, t_horizon=dt)\n            x_pred = x_pred[-1, np.newaxis, :]\n\n            self.rec_dict['state_pred'] = np.append(self.rec_dict['state_pred'], x_pred, axis=0)\n            self.rec_dict['error'] = np.append(self.rec_dict['error'], x_f - x_pred, axis=0)\n\n        # Save datasets\n        x_dim = self.rec_dict[\"state_in\"].shape[1]\n\n        for key in self.rec_dict.keys():\n            print(key, \" \", self.rec_dict[key].shape)\n            self.rec_dict[key] = jsonify(self.rec_dict[key])\n        df = pd.DataFrame(self.rec_dict)\n        df.to_csv(self.rec_file, index=True, mode='a', header=False)\n\n        if self.rec_dict_raw is not None:\n            data_len = min(self.rec_dict_raw[\"state_in\"].shape[0], len(self.rec_dict_raw[\"timestamp\"]))\n\n            # To ensure same length of all entries\n            for key in self.rec_dict_raw.keys():\n                self.rec_dict_raw[key] = self.rec_dict_raw[key][:data_len]\n                print(key, \" \", 
self.rec_dict_raw[key].shape)\n                self.rec_dict_raw[key] = jsonify(self.rec_dict_raw[key])\n\n            df = pd.DataFrame(self.rec_dict_raw)\n            df.to_csv(self.rec_file_raw, index=True, mode='a', header=False)\n\n            self.rec_dict_raw = make_raw_optitrack_dict()\n\n        # Reset recording dictionaries\n        self.rec_dict = make_record_dict(x_dim)\n        self.state_rec_dict = make_state_record_dict(x_dim)\n\n\ndef main():\n    rospy.init_node(\"dd_mpc\")\n\n    # Recording parameters\n    recording_options = {\n        \"recording\": rospy.get_param('~recording', default=True),\n        \"dataset_name\": \"deleteme\",\n        \"training_split\": True,\n        \"overwrite\": True,\n        \"record_raw_optitrack\": True,\n        \"record_raw_state_control\": True,\n    }\n\n    dataset_name = rospy.get_param('~dataset_name', default=None)\n    overwrite = rospy.get_param('~overwrite', default=None)\n    training = rospy.get_param('~training_split', default=None)\n    raw_optitrack = rospy.get_param('~record_raw_optitrack', default=None)\n    raw_state_control = rospy.get_param('~record_raw_state_control', default=None)\n    if dataset_name is not None:\n        recording_options[\"dataset_name\"] = dataset_name\n    if overwrite is not None:\n        recording_options[\"overwrite\"] = overwrite\n    if training is not None:\n        recording_options[\"training_split\"] = training\n    if raw_optitrack is not None:\n        recording_options[\"record_raw_optitrack\"] = raw_optitrack\n    if raw_state_control is not None:\n        recording_options[\"record_raw_state_control\"] = raw_state_control\n\n    # Model loading parameters\n    load_options = {\n        \"git\": \"b6e73a5\",\n        \"model_name\": \"\",\n        \"params\": None\n    }\n    git_id = rospy.get_param('~model_version', default=None)\n    model_name = rospy.get_param('~model_name', default=None)\n    model_type = rospy.get_param('~model_type', 
default=\"gp\")\n    if git_id is not None:\n        load_options[\"git\"] = git_id\n    if model_name is not None:\n        load_options[\"model_name\"] = str(model_name)\n    if model_type is not None:\n        load_options[\"model_type\"] = str(model_type)\n\n    # Equivalent to the double get_param call: defaults to False when the parameter is unset\n    plot = rospy.get_param('~plot', default=False)\n\n    env = rospy.get_param('~environment', default='gazebo')\n    default_quad = \"hummingbird\" if env == \"gazebo\" else \"kingfisher\"\n    load_options[\"params\"] = {env: \"default\"}\n\n    if model_type == \"rdrv\":\n        rdrv = load_rdrv(model_options=load_options)\n    else:\n        rdrv = None\n\n    quad_name = rospy.get_param('~quad_name', default=None)\n    quad_name = quad_name if quad_name is not None else default_quad\n    # Change if needed. This is currently the supported combination.\n    if env == \"gazebo\":\n        assert quad_name == \"hummingbird\"\n    elif env == \"agisim\":\n        assert quad_name == \"kingfisher\"\n\n    # Reset experiments switch\n    reset = rospy.get_param('~reset_experiment', default=True)\n\n    DDMPCWrapper(quad_name, env, recording_options, load_options, rdrv=rdrv, plot=plot,\n                 reset_experiment=reset)\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "ros_dd_mpc/nodes/reference_publisher_node.py",
    "content": "#!/usr/bin/env python3.6\n\"\"\" Node wrapper for publishing trajectories for the MPC pipeline to track.\n\nThis program is free software: you can redistribute it and/or modify it under\nthe terms of the GNU General Public License as published by the Free Software\nFoundation, either version 3 of the License, or (at your option) any later\nversion.\nThis program is distributed in the hope that it will be useful, but WITHOUT\nANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\nFOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.\nYou should have received a copy of the GNU General Public License along with\nthis program. If not, see <http://www.gnu.org/licenses/>.\n\"\"\"\n\nfrom std_msgs.msg import Bool\nfrom ros_dd_mpc.msg import ReferenceTrajectory\nfrom src.quad_mpc.create_ros_dd_mpc import custom_quad_param_loader\nfrom src.utils.trajectories import loop_trajectory, random_trajectory, lemniscate_trajectory, flyover_trajectory, \\\n    flyover_trajectory_collect\nimport numpy as np\nimport rospy\n\n\nclass ReferenceGenerator:\n\n    def __init__(self):\n\n        self.gp_mpc_busy = True\n\n        rospy.init_node(\"reference_generator\")\n\n        plot = rospy.get_param('~plot', default=True)\n\n        quad_name = rospy.get_param('~quad_name', default='hummingbird')\n        quad = custom_quad_param_loader(quad_name)\n\n        # Configuration for random flight mode\n        n_seeds = rospy.get_param('~n_seeds', default=1)\n        v_list = rospy.get_param('~v_list', default=[3.5])\n        if isinstance(v_list, str):\n            v_list = v_list.split('[')[1].split(']')[0]\n            v_list = [float(v.strip()) for v in v_list.split(',')]\n\n        # Select if generate \"random\" trajectories, \"hover\" mode or increasing speed \"loop\" mode\n        mode = rospy.get_param('~mode', default=\"random\")\n        if mode != \"random\" and mode != \"flyover\":\n            n_seeds = 1\n\n       
 # Load parameters of loop trajectory\n        loop_r = rospy.get_param('~loop_r', default=1.5)\n        loop_z = rospy.get_param('~loop_z', default=1)\n        loop_v_max = rospy.get_param('~loop_v_max', default=3)\n        loop_a = rospy.get_param('~loop_lin_a', default=0.075)\n        loop_cc = rospy.get_param('~loop_clockwise', default=True)\n        loop_yawing = rospy.get_param('~loop_yawing', default=True)\n\n        # Load world limits if any\n        map_limits = rospy.get_param('~world_limits', default=None)\n\n        # Control at 50 hz. Use time horizon=1 and 10 nodes\n        n_mpc_nodes = rospy.get_param('~n_nodes', default=10)\n        t_horizon = rospy.get_param('~t_horizon', default=1.0)\n        control_freq_factor = rospy.get_param('~control_freq_factor', default=5 if quad_name == \"hummingbird\" else 6)\n        opt_dt = t_horizon / (n_mpc_nodes * control_freq_factor)\n\n        reference_topic = \"reference\"\n        status_topic = \"busy\"\n        reference_pub = rospy.Publisher(reference_topic, ReferenceTrajectory, queue_size=1)\n        rospy.Subscriber(status_topic, Bool, self.status_callback)\n\n        v_ind = 0\n        seed = 0\n\n        # Calculate total number of trajectories\n        n_trajectories = n_seeds * len(v_list)\n        curr_trajectory_ind = 0\n\n        rate = rospy.Rate(0.2)\n        while not rospy.is_shutdown():\n            if not self.gp_mpc_busy and mode == \"hover\":\n                rospy.loginfo(\"Sending hover-in-place command\")\n                msg = ReferenceTrajectory()\n                reference_pub.publish(msg)\n                rospy.signal_shutdown(\"All trajectories were sent to the MPC\")\n                break\n\n            if not self.gp_mpc_busy and curr_trajectory_ind == n_trajectories:\n                msg = ReferenceTrajectory()\n                reference_pub.publish(msg)\n                rospy.signal_shutdown(\"All trajectories were sent to the MPC\")\n                break\n\n            if 
not self.gp_mpc_busy and mode == \"loop\":\n                rospy.loginfo(\"Sending increasing speed loop trajectory\")\n                x_ref, t_ref, u_ref = loop_trajectory(quad, opt_dt, v_max=loop_v_max, radius=loop_r, z=loop_z,\n                                                      lin_acc=loop_a, clockwise=loop_cc, map_name=map_limits,\n                                                      yawing=loop_yawing, plot=plot)\n\n                msg = ReferenceTrajectory()\n                msg.traj_name = \"circle\"\n                msg.v_input = loop_v_max\n                msg.seq_len = x_ref.shape[0]\n                msg.trajectory = np.reshape(x_ref, (-1,)).tolist()\n                msg.dt = t_ref.tolist()\n                msg.inputs = np.reshape(u_ref, (-1,)).tolist()\n\n                reference_pub.publish(msg)\n                curr_trajectory_ind += 1\n                self.gp_mpc_busy = True\n\n            elif not self.gp_mpc_busy and mode == \"lemniscate\":\n                rospy.loginfo(\"Sending increasing speed lemniscate trajectory\")\n                x_ref, t_ref, u_ref = lemniscate_trajectory(quad, opt_dt, v_max=loop_v_max, radius=loop_r, z=loop_z,\n                                                            lin_acc=loop_a, clockwise=loop_cc, map_name=map_limits,\n                                                            yawing=loop_yawing, plot=plot)\n\n                msg = ReferenceTrajectory()\n                msg.traj_name = \"lemniscate\"\n                msg.v_input = loop_v_max\n                msg.seq_len = x_ref.shape[0]\n                msg.trajectory = np.reshape(x_ref, (-1,)).tolist()\n                msg.dt = t_ref.tolist()\n                msg.inputs = np.reshape(u_ref, (-1,)).tolist()\n\n                reference_pub.publish(msg)\n                curr_trajectory_ind += 1\n                self.gp_mpc_busy = True\n\n            elif not self.gp_mpc_busy and mode == \"random\":\n\n                speed = v_list[v_ind]\n                
log_msg = \"Random trajectory generator %d/%d. Seed: %d. Mean vel: %.3f m/s\" % \\\n                          (curr_trajectory_ind + 1, n_trajectories, seed, speed)\n                rospy.loginfo(log_msg)\n\n                x_ref, t_ref, u_ref = random_trajectory(quad, opt_dt, seed=seed, speed=speed, map_name=map_limits,\n                                                        plot=plot)\n                msg = ReferenceTrajectory()\n                msg.traj_name = \"random_\" + str(seed)\n                msg.v_input = speed\n                msg.seq_len = x_ref.shape[0]\n                msg.trajectory = np.reshape(x_ref, (-1, )).tolist()\n                msg.dt = t_ref.tolist()\n                msg.inputs = np.reshape(u_ref, (-1, )).tolist()\n\n                reference_pub.publish(msg)\n                curr_trajectory_ind += 1\n                self.gp_mpc_busy = True\n\n                if v_ind + 1 < len(v_list):\n                    v_ind += 1\n                else:\n                    seed += 1\n                    v_ind = 0\n\n            elif not self.gp_mpc_busy and mode == \"flyover_collect\":\n\n                speed = v_list[v_ind]\n                log_msg = \"Flyover trajectory generator %d/%d. Seed: %d. 
Mean vel: %.3f m/s\" % \\\n                          (curr_trajectory_ind + 1, n_trajectories, seed, speed)\n                rospy.loginfo(log_msg)\n\n                x_ref, t_ref, u_ref = flyover_trajectory_collect(quad, opt_dt, seed=seed, speed=speed,\n                                                                 flyover_box_name=map_limits)\n                msg = ReferenceTrajectory()\n                msg.traj_name = \"flyover_\" + str(seed)\n                msg.v_input = speed\n                msg.seq_len = x_ref.shape[0]\n                msg.trajectory = np.reshape(x_ref, (-1, )).tolist()\n                msg.dt = t_ref.tolist()\n                msg.inputs = np.reshape(u_ref, (-1, )).tolist()\n\n                reference_pub.publish(msg)\n                curr_trajectory_ind += 1\n                self.gp_mpc_busy = True\n\n                if v_ind + 1 < len(v_list):\n                    v_ind += 1\n                else:\n                    seed += 1\n                    v_ind = 0\n\n            elif not self.gp_mpc_busy and mode == \"flyover\":\n\n                speed = v_list[v_ind]\n                log_msg = \"Flyover trajectory generator %d/%d. Seed: %d. 
Mean vel: %.3f m/s\" % \\\n                          (curr_trajectory_ind + 1, n_trajectories, seed, speed)\n                rospy.loginfo(log_msg)\n\n                x_ref, t_ref, u_ref = flyover_trajectory(quad, opt_dt, seed=seed, speed=speed,\n                                                         flyover_box_name=map_limits)\n                msg = ReferenceTrajectory()\n                msg.traj_name = \"flyover_\" + str(seed)\n                msg.v_input = speed\n                msg.seq_len = x_ref.shape[0]\n                msg.trajectory = np.reshape(x_ref, (-1, )).tolist()\n                msg.dt = t_ref.tolist()\n                msg.inputs = np.reshape(u_ref, (-1, )).tolist()\n\n                reference_pub.publish(msg)\n                curr_trajectory_ind += 1\n                self.gp_mpc_busy = True\n\n                if v_ind + 1 < len(v_list):\n                    v_ind += 1\n                else:\n                    seed += 1\n                    v_ind = 0\n\n            elif not self.gp_mpc_busy:\n                raise ValueError(\"Unknown trajectory type: %s\" % mode)\n\n            rate.sleep()\n\n    def status_callback(self, msg):\n        \"\"\"\n        Callback function for tracking if the dd_mpc node is busy\n        :param msg: Message from the subscriber\n        :type msg: Bool\n        \"\"\"\n        self.gp_mpc_busy = msg.data\n\n\nif __name__ == \"__main__\":\n\n    ReferenceGenerator()\n"
  },
  {
    "path": "ros_dd_mpc/package.xml",
    "content": "<?xml version=\"1.0\"?>\n<package format=\"2\">\n  <name>ros_dd_mpc</name>\n  <version>0.0.0</version>\n  <description>The ros_dd_mpc package</description>\n\n  <!-- One maintainer tag required, multiple allowed, one person per tag -->\n  <!-- Example:  -->\n  <!-- <maintainer email=\"jane.doe@example.com\">Jane Doe</maintainer> -->\n  <maintainer email=\"guillemtorrente@hotmail.com\">Guillem Torrente</maintainer>\n\n\n  <!-- One license tag required, multiple allowed, one license per tag -->\n  <!-- Commonly used license strings: -->\n  <!--   BSD, MIT, Boost Software License, GPLv2, GPLv3, LGPLv2.1, LGPLv3 -->\n  <license>TODO</license>\n\n\n  <!-- Url tags are optional, but multiple are allowed, one per tag -->\n  <!-- Optional attribute type can be: website, bugtracker, or repository -->\n  <!-- Example: -->\n  <!-- <url type=\"website\">http://wiki.ros.org/ros_python3_issues</url> -->\n\n\n  <!-- Author tags are optional, multiple are allowed, one per tag -->\n  <!-- Authors do not have to be maintainers, but could be -->\n  <!-- Example: -->\n  <!-- <author email=\"jane.doe@example.com\">Jane Doe</author> -->\n\n\n  <!-- The *depend tags are used to specify dependencies -->\n  <!-- Dependencies can be catkin packages or system dependencies -->\n  <!-- Examples: -->\n  <!-- Use depend as a shortcut for packages that are both build and exec dependencies -->\n  <!--   <depend>roscpp</depend> -->\n  <!--   Note that this is equivalent to the following: -->\n  <!--   <build_depend>roscpp</build_depend> -->\n  <!--   <exec_depend>roscpp</exec_depend> -->\n  <!-- Use build_depend for packages you need at compile time: -->\n  <build_depend>message_generation</build_depend>\n  <!-- Use build_export_depend for packages you need in order to build against this package: -->\n  <!--   <build_export_depend>message_generation</build_export_depend> -->\n  <!-- Use buildtool_depend for build tool packages: -->\n  <!--   
<buildtool_depend>catkin</buildtool_depend> -->\n  <!-- Use exec_depend for packages you need at runtime: -->\n  <exec_depend>message_runtime</exec_depend>\n  <!-- Use test_depend for packages you need only for testing: -->\n  <!--   <test_depend>gtest</test_depend> -->\n  <!-- Use doc_depend for packages you need only for building documentation: -->\n  <!--   <doc_depend>doxygen</doc_depend> -->\n  <buildtool_depend>catkin</buildtool_depend>\n  <build_depend>rospy</build_depend>\n  <build_export_depend>rospy</build_export_depend>\n  <exec_depend>rospy</exec_depend>\n  <exec_depend>agiros_msgs</exec_depend>\n  <exec_depend>deep_casadi</exec_depend>\n\n\n  <!-- The export tag contains other, unspecified, tags -->\n  <export>\n    <!-- Other tools can request additional information be placed here -->\n\n  </export>\n</package>\n"
  },
  {
    "path": "ros_dd_mpc/src/__init__.py",
    "content": ""
  },
  {
    "path": "ros_dd_mpc/src/experiments/__init__.py",
    "content": ""
  },
  {
    "path": "ros_dd_mpc/src/experiments/comparative_experiment.py",
    "content": "\"\"\" Runs the experimental setup to compare different data-learned models for the MPC on the Simplified Simulator.\n\nThis program is free software: you can redistribute it and/or modify it under\nthe terms of the GNU General Public License as published by the Free Software\nFoundation, either version 3 of the License, or (at your option) any later\nversion.\nThis program is distributed in the hope that it will be useful, but WITHOUT\nANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\nFOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.\nYou should have received a copy of the GNU General Public License along with\nthis program. If not, see <http://www.gnu.org/licenses/>.\n\"\"\"\nimport os.path\nimport time\nimport argparse\nimport numpy as np\nimport torch\nimport ml_casadi.torch as mc\n\nfrom config.configuration_parameters import SimpleSimConfig\nfrom src.model_fitting.mlp_common import NormalizedMLP\nfrom src.quad_mpc.quad_3d_mpc import Quad3DMPC\nfrom src.quad_mpc.quad_3d import Quadrotor3D\nfrom src.utils.quad_3d_opt_utils import get_reference_chunk\nfrom src.utils.utils import load_pickled_models, interpol_mse, separate_variables, get_model_dir_and_file\nfrom src.utils.visualization import initialize_drone_plotter, draw_drone_simulation, trajectory_tracking_results, \\\n    get_experiment_files\nfrom src.utils.visualization import mse_tracking_experiment_plot\nfrom src.utils.trajectories import random_trajectory, lemniscate_trajectory, loop_trajectory\nfrom src.model_fitting.rdrv_fitting import load_rdrv\n\nglobal model_num\n\n\ndef prepare_quadrotor_mpc(simulation_options, version=None, name=None, reg_type=\"gp\", quad_name=None,\n                          t_horizon=1.0, q_diagonal=None, r_diagonal=None, q_mask=None):\n    \"\"\"\n    Creates a Quad3DMPC for the custom simulator.\n    @param simulation_options: Parameters for the Quadrotor3D object.\n    @param version: loading version 
for the GP/RDRv model.\n    @param name: name to load for the GP/RDRv model.\n    @param reg_type: either `gp` or `rdrv`.\n    @param quad_name: Name for the quadrotor. Default name will be used if not specified.\n    @param t_horizon: Time horizon of MPC in seconds.\n    @param q_diagonal: 12-dimensional diagonal of the Q matrix (p_xyz, a_xyz, v_xyz, w_xyz)\n    @param r_diagonal: 4-dimensional diagonal of the R matrix (motor inputs 1-4)\n    @param q_mask: State variable weighting mask (boolean). Which state variables compute towards state loss function?\n\n    @return: A Quad3DMPC wrapper for the custom simulator.\n    @rtype: Quad3DMPC\n    \"\"\"\n\n    # Default Q and R matrix for LQR cost\n    if q_diagonal is None:\n        q_diagonal = np.array([10, 10, 10, 0.1, 0.1, 0.1, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])\n    if r_diagonal is None:\n        r_diagonal = np.array([0.1, 0.1, 0.1, 0.1])\n    if q_mask is None:\n        q_mask = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]).T\n\n    # Simulation integration step (the smaller the more \"continuous\"-like simulation.\n    simulation_dt = 5e-4\n\n    # Number of MPC optimization nodes\n    n_mpc_nodes = 10\n\n    # Calculate time between two MPC optimization nodes [s]\n    node_dt = t_horizon / n_mpc_nodes\n\n    # Quadrotor simulator\n    my_quad = Quadrotor3D(**simulation_options)\n    model_name = quad_name\n\n    # Configuration for MLP Model\n    mlp_conf = None\n\n    if version is not None and name is not None:\n\n        load_ops = {\"params\": simulation_options}\n        load_ops.update({\"git\": version, \"model_name\": name})\n\n        # Load trained GP model\n        if reg_type == \"gp\":\n            pre_trained_models = load_pickled_models(model_options=load_ops)\n            rdrv_d = None\n\n        elif 'mlp' in reg_type:\n            mlp_conf = {'approximated': False, 'v_inp': True, 'u_inp': False, 'T_out': False, 'ground_map_input': False,\n                        'torque_output': False, 
'two_step_rti': False}\n            directory, file_name = get_model_dir_and_file(load_ops)\n            saved_dict = torch.load(os.path.join(directory, f\"{file_name}.pt\"))\n            mlp_model = mc.nn.MultiLayerPerceptron(saved_dict['input_size'], saved_dict['hidden_size'],\n                                                   saved_dict['output_size'], saved_dict['hidden_layers'], 'Tanh')\n            model = NormalizedMLP(mlp_model, torch.tensor(np.zeros((saved_dict['input_size'],))).float(),\n                                  torch.tensor(np.zeros((saved_dict['input_size'],))).float(),\n                                  torch.tensor(np.zeros((saved_dict['output_size'],))).float(),\n                                  torch.tensor(np.zeros((saved_dict['output_size'],))).float())\n            model.load_state_dict(saved_dict['state_dict'])\n            model.eval()\n            pre_trained_models = model\n            rdrv_d = None\n\n            if reg_type.endswith('approx2'):\n                mlp_conf['approximated'] = True\n                mlp_conf['approx_order'] = 2\n            elif reg_type.endswith('approx') or reg_type.endswith('approx_1'):\n                mlp_conf['approximated'] = True\n                mlp_conf['approx_order'] = 1\n            if '_u' in reg_type:\n                mlp_conf['u_inp'] = True\n            if '_T' in reg_type:\n                mlp_conf['T_out'] = True\n\n        else:\n            rdrv_d = load_rdrv(model_options=load_ops)\n            pre_trained_models = None\n\n    else:\n        pre_trained_models = rdrv_d = None\n\n    if model_name is None:\n        model_name = \"my_quad_\" + str(globals()['model_num'])\n        globals()['model_num'] += 1\n\n    # Initialize quad MPC\n    quad_mpc = Quad3DMPC(my_quad, t_horizon=t_horizon, optimization_dt=node_dt, simulation_dt=simulation_dt,\n                         q_cost=q_diagonal, r_cost=r_diagonal, n_nodes=n_mpc_nodes,\n                         
pre_trained_models=pre_trained_models, model_name=model_name, q_mask=q_mask, rdrv_d_mat=rdrv_d,\n                         model_conf=mlp_conf)\n\n    return quad_mpc\n\n\ndef main(quad_mpc, av_speed, reference_type=None, plot=False):\n    \"\"\"\n\n    :param quad_mpc:\n    :type quad_mpc: Quad3DMPC\n    :param av_speed:\n    :param reference_type:\n    :param plot:\n    :return:\n    \"\"\"\n\n    # Recover some necessary variables from the MPC object\n    my_quad = quad_mpc.quad\n    n_mpc_nodes = quad_mpc.n_nodes\n    simulation_dt = quad_mpc.simulation_dt\n    t_horizon = quad_mpc.t_horizon\n\n    reference_over_sampling = 5\n    mpc_period = t_horizon / (n_mpc_nodes * reference_over_sampling)\n\n    # Choose the reference trajectory:\n    if reference_type == \"loop\":\n        # Circular trajectory\n        reference_traj, reference_timestamps, reference_u = loop_trajectory(\n            quad=my_quad, discretization_dt=mpc_period, radius=5, z=1, lin_acc=av_speed * 0.25, clockwise=True,\n            yawing=False, v_max=av_speed, map_name=None, plot=plot)\n    elif reference_type == \"lemniscate\":\n        # Lemniscate trajectory\n        reference_traj, reference_timestamps, reference_u = lemniscate_trajectory(\n            quad=my_quad, discretization_dt=mpc_period, radius=5, z=1, lin_acc=av_speed * 0.25, clockwise=True,\n            yawing=False, v_max=av_speed, map_name=None, plot=plot)\n    else:\n        # Get a random smooth position trajectory\n        reference_traj, reference_timestamps, reference_u = random_trajectory(\n            quad=my_quad, discretization_dt=mpc_period, seed=reference_type[\"random\"], speed=av_speed, plot=plot)\n\n    # Set quad initial state equal to the initial reference trajectory state\n    quad_current_state = reference_traj[0, :].tolist()\n    my_quad.set_state(quad_current_state)\n\n    real_time_artists = None\n    if plot:\n        # Initialize real time plot stuff\n        world_radius = 
np.max(np.abs(reference_traj[:, :2])) * 1.2\n        real_time_artists = initialize_drone_plotter(n_props=n_mpc_nodes, quad_rad=my_quad.length,\n                                                     world_rad=world_radius, full_traj=reference_traj)\n\n    start_time = time.time()\n    max_simulation_time = 10000\n\n    ref_u = reference_u[0, :]\n    quad_trajectory = np.zeros((len(reference_timestamps), len(quad_current_state)))\n    u_optimized_seq = np.zeros((len(reference_timestamps), 4))\n\n    # Sliding reference trajectory initial index\n    current_idx = 0\n\n    # Measure the MPC optimization time\n    mean_opt_time = 0.0\n\n    # Measure total simulation time\n    total_sim_time = 0.0\n\n    while (time.time() - start_time) < max_simulation_time and current_idx < reference_traj.shape[0]:\n\n        quad_current_state = my_quad.get_state(quaternion=True, stacked=True)\n\n        quad_trajectory[current_idx, :] = np.expand_dims(quad_current_state, axis=0)\n        u_optimized_seq[current_idx, :] = np.reshape(ref_u, (1, -1))\n\n        # ##### Optimization runtime (outer loop) ##### #\n        # Get the chunk of trajectory required for the current optimization\n        ref_traj_chunk, ref_u_chunk = get_reference_chunk(\n            reference_traj, reference_u, current_idx, n_mpc_nodes, reference_over_sampling)\n\n        model_ind = quad_mpc.set_reference(x_reference=separate_variables(ref_traj_chunk), u_reference=ref_u_chunk)\n\n        # Optimize control input to reach pre-set target\n        t_opt_init = time.time()\n        w_opt, x_pred = quad_mpc.optimize(use_model=model_ind, return_x=True)\n        mean_opt_time += time.time() - t_opt_init\n\n        # Select first input (one for each motor) - MPC applies only first optimized input to the plant\n        ref_u = np.squeeze(np.array(w_opt[:4]))\n\n        if len(quad_trajectory) > 0 and plot and current_idx > 0:\n            draw_drone_simulation(real_time_artists, quad_trajectory[:current_idx, :], 
my_quad, targets=None,\n                                  targets_reached=None, pred_traj=x_pred, x_pred_cov=None)\n\n        simulation_time = 1e-8\n\n        # ##### Simulation runtime (inner loop) ##### #\n        while simulation_time < mpc_period:\n            simulation_time += simulation_dt\n            total_sim_time += simulation_dt\n            quad_mpc.simulate(ref_u)\n\n        u_optimized_seq[current_idx, :] = np.reshape(ref_u, (1, -1))\n\n        current_idx += 1\n\n    quad_current_state = my_quad.get_state(quaternion=True, stacked=True)\n    quad_trajectory[-1, :] = np.expand_dims(quad_current_state, axis=0)\n    u_optimized_seq[-1, :] = np.reshape(ref_u, (1, -1))\n\n    # Average elapsed time per optimization\n    mean_opt_time /= current_idx\n\n    rmse = interpol_mse(reference_timestamps, reference_traj[:, :3], reference_timestamps, quad_trajectory[:, :3])\n    max_vel = np.max(np.sqrt(np.sum(reference_traj[:, 7:10] ** 2, 1)))\n\n    with_gp = ' + GP ' if quad_mpc.gp_ensemble is not None else ' - GP '\n    title = r'$v_{max}$=%.2f m/s | RMSE: %.4f m | %s ' % (max_vel, float(rmse), with_gp)\n\n    if plot:\n        trajectory_tracking_results(reference_timestamps, reference_traj, quad_trajectory,\n                                    reference_u, u_optimized_seq, title)\n\n    return rmse, max_vel, mean_opt_time\n\n\nif __name__ == '__main__':\n\n    parser = argparse.ArgumentParser()\n\n    parser.add_argument(\"--model_version\", type=str, default=\"\", nargs=\"+\",\n                        help=\"Versions to load for the regression models. By default it is an 8 digit git hash.\"\n                             \"Must specify the version for each model separated by spaces.\")\n\n    parser.add_argument(\"--model_name\", type=str, default=\"\", nargs=\"+\",\n                        help=\"Name of the regression models within the specified <model_version> folders. 
\"\n                             \"Must specify the names for all models separated by spaces.\")\n\n    parser.add_argument(\"--model_type\", type=str, default=\"\", nargs=\"+\",\n                        help=\"Type of regression models (GP or RDRv linear). \"\n                             \"Must be specified for all models separated by spaces.\")\n\n    parser.add_argument(\"--fast\", dest=\"fast\", action=\"store_true\",\n                        help=\"Set to True to run a fast experiment with less velocity samples.\")\n\n    parser.add_argument(\"--print_results\", dest=\"print_results\", action=\"store_true\",\n                        help=\"Print the results data frame.\")\n    parser.set_defaults(print_results=False)\n\n    input_arguments = parser.parse_args()\n\n    globals()['model_num'] = 0\n\n    # Trajectory options\n    traj_type_vec = [{\"random\": 1}, \"loop\", \"lemniscate\"]\n    traj_type_labels = [\"Random\", \"Circle\", \"Lemniscate\"]\n    if input_arguments.fast:\n        av_speed_vec = [[2.0, 3.5],\n                        [2.0, 12.0],\n                        [2.0, 12.0]]\n    else:\n        av_speed_vec = [[2.0, 2.7, 3.0, 3.2, 3.5],\n                        [2.0, 4.5, 7.0, 9.5, 12.0],\n                        [2.0, 4.5, 7.0, 9.5, 12.0]]\n\n    # Model options\n    git_list = input_arguments.model_version\n    name_list = input_arguments.model_name\n    type_list = input_arguments.model_type\n\n    assert len(git_list) == len(name_list) == len(type_list)\n\n    # Simulation options\n    plot_sim = SimpleSimConfig.custom_sim_gui\n    noisy_sim_options = SimpleSimConfig.simulation_disturbances\n    perfect_sim_options = {\"payload\": False, \"drag\": False, \"noisy\": False, \"motor_noise\": False}\n    model_vec = [\n        {\"simulation_options\": perfect_sim_options, \"model\": None},\n        {\"simulation_options\": noisy_sim_options, \"model\": None}]\n\n    legends = ['perfect', 'nominal']\n    for git, m_name, gp_or_rdrv in 
zip(git_list, name_list, type_list):\n        model_vec += [{\"simulation_options\": noisy_sim_options,\n                       \"model\": {\"version\": git, \"name\": m_name, \"reg_type\": gp_or_rdrv}}]\n        legends += [gp_or_rdrv + \": \" + m_name]\n\n    y_label = \"RMSE [m]\"\n\n    # Define result vectors\n    mse = np.zeros((len(traj_type_vec), len(av_speed_vec[0]), len(model_vec)))\n    v_max = np.zeros((len(traj_type_vec), len(av_speed_vec[0])))\n    t_opt = np.zeros((len(traj_type_vec), len(av_speed_vec[0]), len(model_vec)))\n\n    for n_train_id, model_type in enumerate(model_vec):\n\n        for traj_id, traj_type in enumerate(traj_type_vec):\n\n            for v_id, speed in enumerate(av_speed_vec[traj_id]):\n                if model_type[\"model\"] is not None:\n                    custom_mpc = prepare_quadrotor_mpc(model_type[\"simulation_options\"], **model_type[\"model\"])\n                else:\n                    custom_mpc = prepare_quadrotor_mpc(model_type[\"simulation_options\"])\n\n                traj_params = {\"av_speed\": speed, \"reference_type\": traj_type, \"plot\": plot_sim}\n\n                mse[traj_id, v_id, n_train_id], traj_v, opt_dt = main(custom_mpc, **traj_params)\n                t_opt[traj_id, v_id, n_train_id] += opt_dt\n\n                if v_max[traj_id, v_id] == 0:\n                    v_max[traj_id, v_id] = traj_v\n\n    _, err_file, v_file, t_file = get_experiment_files()\n    np.save(err_file, mse)\n    np.save(v_file, v_max)\n    np.save(t_file, t_opt)\n\n    # from src.utils.visualization import load_past_experiments\n    # _, mse, v_max, t_opt = load_past_experiments()\n\n    mse_tracking_experiment_plot(v_max, mse, traj_type_labels, model_vec, legends, [y_label], t_opt=t_opt, font_size=12)\n\n    if input_arguments.print_results:\n        import pandas as pd\n        pd.options.display.float_format = \"{:,.2f}\".format\n        track_dfs = []\n        for i, track in enumerate(traj_type_labels):\n            
index = pd.MultiIndex.from_arrays([[track]*len(av_speed_vec[i]), av_speed_vec[i], v_max[i]],\n                                              names=['Track', 'v_avg', 'v_max'])\n            track_df = pd.DataFrame(mse[i].T*1000, columns=index, index=legends)\n            track_dfs.append(track_df)\n        track_dfs = pd.concat(track_dfs, axis=1)\n        print(track_dfs.to_string())\n\n        pd.options.display.float_format = \"{:,.3f}\".format\n        t_df = pd.DataFrame(np.mean(np.mean(t_opt, axis=0), axis=0)*1000, columns=['avg'], index=legends)\n        print(t_df.to_string())\n"
  },
  {
    "path": "ros_dd_mpc/src/experiments/point_tracking_and_record.py",
    "content": "\"\"\" Executes aggressive maneuvers for collecting flight data on the Simplified Simulator to later train models on.\n\nThis program is free software: you can redistribute it and/or modify it under\nthe terms of the GNU General Public License as published by the Free Software\nFoundation, either version 3 of the License, or (at your option) any later\nversion.\nThis program is distributed in the hope that it will be useful, but WITHOUT\nANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\nFOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.\nYou should have received a copy of the GNU General Public License along with\nthis program. If not, see <http://www.gnu.org/licenses/>.\n\"\"\"\n\n\nimport os\nimport sys\nimport time\nimport copy\nimport argparse\nimport itertools\n\nimport pandas as pd\nimport numpy as np\nimport casadi as cs\n\nfrom src.utils.visualization import draw_drone_simulation, initialize_drone_plotter\nfrom src.experiments.comparative_experiment import prepare_quadrotor_mpc\nfrom src.utils.utils import safe_mknode_recursive, jsonify, euclidean_dist, get_data_dir_and_file\nfrom config.configuration_parameters import SimpleSimConfig\n\n\ndef get_record_file_and_dir(record_dict_template, recording_options, simulation_setup, overwrite=True):\n    dataset_name = recording_options[\"dataset_name\"]\n    training_split = recording_options[\"training_split\"]\n\n    # Directory and file name for data recording\n    rec_file_dir, rec_file_name = get_data_dir_and_file(dataset_name, training_split, simulation_setup)\n\n    overwritten = safe_mknode_recursive(rec_file_dir, rec_file_name, overwrite=overwrite)\n\n    rec_dict = copy.deepcopy(record_dict_template)\n    rec_file = os.path.join(rec_file_dir, rec_file_name)\n    if overwrite or (not overwrite and not overwritten):\n        for key in rec_dict.keys():\n            rec_dict[key] = jsonify(rec_dict[key])\n\n        df = 
pd.DataFrame(rec_dict)\n        df.to_csv(rec_file, index=False, header=True)\n\n        rec_dict = copy.deepcopy(record_dict_template)\n\n    return rec_dict, rec_file\n\n\ndef make_record_dict(state_dim):\n    blank_recording_dict = {\n            \"state_in\": np.zeros((0, state_dim)),\n            \"state_ref\": np.zeros((0, state_dim)),\n            \"error\": np.zeros((0, state_dim)),\n            \"input_in\": np.zeros((0, 4)),\n            \"state_out\": np.zeros((0, state_dim)),\n            \"state_pred\": np.zeros((0, state_dim)),\n            \"timestamp\": np.zeros((0, 1)),\n            \"dt\": np.zeros((0, 1)),\n    }\n    return blank_recording_dict\n\n\ndef check_out_data(rec_dict, state_final, x_pred, w_opt, dt):\n    rec_dict[\"dt\"] = np.append(rec_dict[\"dt\"], dt)\n    rec_dict[\"input_in\"] = np.append(rec_dict[\"input_in\"], w_opt[np.newaxis, :4], axis=0)\n    rec_dict[\"state_out\"] = np.append(rec_dict[\"state_out\"], state_final, 0)\n\n    if x_pred is not None:\n        err = state_final - x_pred\n        rec_dict[\"error\"] = np.append(rec_dict[\"error\"], err, axis=0)\n        rec_dict[\"state_pred\"] = np.append(rec_dict[\"state_pred\"], x_pred[np.newaxis, :], axis=0)\n\n    return rec_dict\n\n\ndef sample_random_target(x_current, world_radius, aggressive=True):\n    \"\"\"\n    Creates a new target point to reach.\n    :param x_current: current position of the quad. Only used if aggressive=True\n    :param world_radius: radius of the area where points are sampled from\n    :param aggressive: if aggressive=True, points will be sampled away from the current position. If False, then points\n    will be sampled uniformly from the world area.\n    :return: new sampled target point. 
A 3-dimensional numpy array.\n    \"\"\"\n\n    if aggressive:\n\n        # Polar 3D coordinates\n        theta = np.random.uniform(0, 2 * np.pi, 1)\n        psi = np.random.uniform(0, 2 * np.pi, 1)\n        r = 1 * world_radius + np.random.uniform(-0.5, 0.5, 1) * world_radius\n\n        # Transform to cartesian\n        x = r * np.sin(theta) * np.cos(psi)\n        y = r * np.sin(theta) * np.sin(psi)\n        z = r * np.cos(theta)\n\n        return x_current + np.array([x, y, z]).reshape((1, 3))\n\n    else:\n        return np.random.uniform(-world_radius, world_radius, (1, 3))\n\n\ndef main(model_options, recording_options, simulation_options, parameters):\n\n    world_radius = 3\n\n    if parameters[\"initial_state\"] is None:\n        initial_state = [0.0, 0.0, 0.0] + [1, 0, 0, 0] + [0, 0, 0] + [0, 0, 0]\n    else:\n        initial_state = parameters[\"initial_state\"]\n    sim_starting_pos = initial_state\n    quad_current_state = sim_starting_pos\n\n    if parameters[\"preset_targets\"] is not None:\n        targets = parameters[\"preset_targets\"]\n    else:\n        targets = sample_random_target(np.array(initial_state[:3]), world_radius,\n                                       aggressive=recording_options[\"recording\"])\n\n    quad_mpc = prepare_quadrotor_mpc(simulation_options, **model_options, t_horizon=0.5,\n                                     q_mask=np.array([1, 1, 1, 0.01, 0.01, 0.01, 1, 1, 1, 0, 0, 0]))\n\n    # Recover some necessary variables from the MPC object\n    my_quad = quad_mpc.quad\n    n_mpc_nodes = quad_mpc.n_nodes\n    t_horizon = quad_mpc.t_horizon\n    simulation_dt = quad_mpc.simulation_dt\n    reference_over_sampling = 3\n    control_period = t_horizon / (n_mpc_nodes * reference_over_sampling)\n\n    my_quad.set_state(quad_current_state)\n\n    # Real time plot params\n    n_forward_props = n_mpc_nodes\n    plot_sim_traj = False\n\n    x_pred = None\n    w_opt = None\n    initial_guess = None\n\n    # The optimization should be 
at least as fast as one optimization time step\n    assert control_period <= t_horizon / n_mpc_nodes\n\n    state = quad_mpc.get_state()\n\n    # ####### Recording mode code ####### #\n    recording = recording_options[\"recording\"]\n    state_dim = state.shape[0]\n    blank_recording_dict = make_record_dict(state_dim)\n\n    # Get recording file and directory\n    if recording:\n        if parameters[\"real_time_plot\"]:\n            parameters[\"real_time_plot\"] = False\n            print(\"Turned off real time plot during recording mode.\")\n\n        rec_dict, rec_file = get_record_file_and_dir(blank_recording_dict, recording_options, simulation_options)\n\n    else:\n        rec_dict = rec_file = None\n\n    # Generate necessary art pack for real time plot\n    if parameters[\"real_time_plot\"]:\n        real_time_art_pack = initialize_drone_plotter(n_props=n_forward_props, quad_rad=my_quad.x_f,\n                                                      world_rad=world_radius)\n    else:\n        real_time_art_pack = None\n\n    start_time = time.time()\n    simulation_time = 0.0\n\n    # Simulation tracking stuff\n    targets_reached = np.array([False for _ in targets])\n    quad_trajectory = np.array(quad_current_state).reshape(1, -1)\n\n    n_iteration_count = 0\n\n    print(\"Targets reached: \", end='')\n    # All targets loop\n    while False in targets_reached and (time.time() - start_time) < parameters[\"max_sim_time\"]:\n        current_target_i = np.where(targets_reached == False)[0][0]\n        current_target = targets[current_target_i]\n        current_target_reached = False\n\n        quad_target_state = [list(current_target), [1, 0, 0, 0], [0, 0, 0], [0, 0, 0]]\n        model_ind = quad_mpc.set_reference(quad_target_state)\n\n        # Provide an initial guess without the uncertainty prop.\n        if initial_guess is None:\n            initial_guess = quad_mpc.optimize(use_model=model_ind)\n            initial_guess = 
quad_mpc.reshape_input_sequence(initial_guess)\n\n        # MPC loop\n        while not current_target_reached and (time.time() - start_time) < parameters[\"max_sim_time\"]:\n\n            # Emergency recovery: reset the quad to the target state if the velocity diverges or too many\n            # iterations elapse without reaching the target\n            if np.any(state[7:10] > 14) or n_iteration_count > 100:\n                n_iteration_count = 0\n                my_quad.set_state(list(itertools.chain.from_iterable(quad_target_state)))\n\n            state = quad_mpc.get_state()\n            if recording:\n                rec_dict[\"state_in\"] = np.append(rec_dict[\"state_in\"], state.T, 0)\n                rec_dict[\"timestamp\"] = np.append(rec_dict[\"timestamp\"], time.time() - start_time)\n                stacked_ref = np.array(list(itertools.chain.from_iterable(quad_target_state)))[np.newaxis, :]\n                rec_dict[\"state_ref\"] = np.append(rec_dict[\"state_ref\"], stacked_ref, 0)\n                if simulation_time != 0.0:\n                    rec_dict = check_out_data(rec_dict, state.T, x_pred, w_opt, simulation_time)\n\n            simulation_time = 0.0\n\n            # Optimize control input to reach pre-set target\n            w_opt, x_pred_horizon = quad_mpc.optimize(use_model=model_ind, return_x=True)\n            if np.any(w_opt > (my_quad.max_input_value + 0.01)) or np.any(w_opt < (my_quad.min_input_value - 0.01)):\n                print(\"MPC constraints were violated\")\n            initial_guess = quad_mpc.reshape_input_sequence(w_opt)\n            # Save initial guess for future optimization. It is a time-shift of the current optimized variables\n            initial_guess = np.array(cs.vertcat(initial_guess[1:, :], cs.DM.zeros(4).T))\n\n            # Select first input (one for each motor) - MPC applies only first optimized input to the plant\n            ref_u = np.squeeze(np.array(w_opt[:4]))\n\n            if recording:\n                # Integrate first input. 
Will be used as nominal model prediction during next save\n                x_pred, _ = quad_mpc.forward_prop(np.squeeze(state), w_opt=w_opt[:4],\n                                                  t_horizon=control_period, use_gp=False)\n                x_pred = x_pred[-1, :]\n\n            if parameters[\"real_time_plot\"]:\n                prop_params = {\"x_0\": np.squeeze(state), \"w_opt\": w_opt, \"use_model\": model_ind, \"t_horizon\": t_horizon}\n                x_int, _ = quad_mpc.forward_prop(**prop_params, use_gp=False)\n                if plot_sim_traj:\n                    x_sim = quad_mpc.simulate_plant(quad_mpc.reshape_input_sequence(w_opt))\n                else:\n                    x_sim = None\n                draw_drone_simulation(real_time_art_pack, quad_trajectory, my_quad, targets,\n                                      targets_reached, x_sim, x_int, x_pred_horizon, follow_quad=False)\n\n            while simulation_time < control_period:\n\n                # Simulation runtime (inner loop)\n                simulation_time += simulation_dt\n                quad_mpc.simulate(ref_u)\n\n                quad_current_state = quad_mpc.get_state()\n\n                # Target is reached\n                if euclidean_dist(current_target[0:3], quad_current_state[0:3, 0], thresh=0.05):\n                    print(\"*\", end='')\n                    sys.stdout.flush()\n                    n_iteration_count = 0\n\n                    # Check out data immediately as new target will be optimized in next step\n                    if recording and len(rec_dict['state_in']) > len(rec_dict['input_in']):\n                        x_pred, _ = quad_mpc.forward_prop(np.squeeze(state), w_opt=w_opt[:4], t_horizon=simulation_time,\n                                                          use_gp=False)\n                        x_pred = x_pred[-1, :]\n                        rec_dict = check_out_data(rec_dict, quad_mpc.get_state().T, x_pred, w_opt, simulation_time)\n\n     
               # Reset optimization time -> Ask for new optimization for next target in next dt\n                    simulation_time = 0.0\n\n                    # Mark current target as reached\n                    current_target_reached = True\n                    targets_reached[current_target_i] = True\n\n                    # Remove initial guess\n                    initial_guess = None\n\n                    if parameters[\"preset_targets\"] is None:\n                        new_target = sample_random_target(quad_current_state[:3], world_radius, aggressive=recording)\n                        targets = np.append(targets, new_target, axis=0)\n                        targets_reached = np.append(targets_reached, False)\n\n                    # Reset PID integral and past errors\n                    quad_mpc.reset()\n\n                    break\n\n            n_iteration_count += 1\n\n            if parameters[\"real_time_plot\"]:\n                quad_trajectory = np.append(quad_trajectory, quad_current_state.T, axis=0)\n                if len(quad_trajectory) > 300:\n                    quad_trajectory = np.delete(quad_trajectory, 0, 0)\n\n        # Current target was reached. 
Remove incomplete recordings\n        if recording:\n            if len(rec_dict['state_in']) > len(rec_dict['input_in']):\n                rec_dict[\"state_in\"] = rec_dict[\"state_in\"][:-1]\n                rec_dict[\"timestamp\"] = rec_dict[\"timestamp\"][:-1]\n                rec_dict[\"state_ref\"] = rec_dict[\"state_ref\"][:-1]\n\n            for key in rec_dict.keys():\n                rec_dict[key] = jsonify(rec_dict[key])\n\n            df = pd.DataFrame(rec_dict)\n            df.to_csv(rec_file, index=True, mode='a', header=False)\n\n            rec_dict = copy.deepcopy(blank_recording_dict)\n\n\nif __name__ == '__main__':\n\n    parser = argparse.ArgumentParser()\n\n    parser.add_argument(\"--model_version\", type=str, default=\"\",\n                        help=\"Version to load for the regression models. By default it is an 8 digit git hash.\")\n\n    parser.add_argument(\"--model_name\", type=str, default=\"\",\n                        help=\"Name of the regression model within the specified <model_version> folder.\")\n\n    parser.add_argument(\"--model_type\", type=str, default=\"gp\", choices=[\"gp\", \"rdrv\"],\n                        help=\"Type of regression model (GP or RDRv linear)\")\n\n    parser.add_argument(\"--recording\", dest=\"recording\", action=\"store_true\",\n                        help=\"Set to True to enable recording mode.\")\n    parser.set_defaults(recording=False)\n\n    parser.add_argument(\"--dataset_name\", type=str, default=\"simplified_sim_dataset\",\n                        help=\"Name for the generated dataset.\")\n\n    parser.add_argument(\"--test_split\", dest=\"test_split\", action=\"store_true\",\n                        help=\"If the data set is the training or test split.\")\n    parser.set_defaults(test_split=False)\n\n    parser.add_argument(\"--simulation_time\", type=float, default=300,\n                        help=\"Total duration of the simulation in seconds.\")\n\n    args = parser.parse_args()\n\n   
 np.random.seed(123 if args.test_split else 456)\n\n    acados_config = {\n        \"solver_type\": \"SQP\",\n        \"terminal_cost\": True\n    }\n\n    run_options = {\n        \"model_options\": {\n            \"version\": args.model_version,\n            \"name\": args.model_name,\n            \"reg_type\": args.model_type,\n            \"quad_name\": \"my_quad\"\n        },\n        \"recording_options\": {\n            \"recording\": args.recording,\n            \"dataset_name\": args.dataset_name,\n            \"training_split\": not args.test_split,\n        },\n        \"simulation_options\": SimpleSimConfig.simulation_disturbances,\n        \"parameters\": {\n            \"real_time_plot\": SimpleSimConfig.custom_sim_gui,\n            \"max_sim_time\": args.simulation_time,\n            \"preset_targets\": None,\n            \"initial_state\": None,\n            \"acados_options\": acados_config\n        }\n    }\n\n    main(**run_options)\n"
  },
  {
    "path": "ros_dd_mpc/src/experiments/trajectory_test.py",
    "content": "\"\"\" Tracks a specified trajectory on the simplified simulator using the data-augmented MPC.\n\nThis program is free software: you can redistribute it and/or modify it under\nthe terms of the GNU General Public License as published by the Free Software\nFoundation, either version 3 of the License, or (at your option) any later\nversion.\nThis program is distributed in the hope that it will be useful, but WITHOUT\nANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\nFOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.\nYou should have received a copy of the GNU General Public License along with\nthis program. If not, see <http://www.gnu.org/licenses/>.\n\"\"\"\n\n\nimport time\nimport json\nimport argparse\nimport numpy as np\nfrom tqdm import tqdm\nfrom src.utils.utils import separate_variables\nfrom src.utils.quad_3d_opt_utils import get_reference_chunk\nfrom src.utils.trajectories import loop_trajectory, lemniscate_trajectory, check_trajectory\nfrom src.utils.visualization import initialize_drone_plotter, draw_drone_simulation, trajectory_tracking_results\nfrom src.experiments.comparative_experiment import prepare_quadrotor_mpc\nfrom config.configuration_parameters import SimpleSimConfig\n\n\ndef main(args):\n    params = {\n        \"version\": args.model_version,\n        \"name\": args.model_name,\n        \"reg_type\": args.model_type,\n        \"quad_name\": \"my_quad\"\n    }\n\n    # Load the disturbances for the custom offline simulator.\n    simulation_options = SimpleSimConfig.simulation_disturbances\n\n    debug_plots = SimpleSimConfig.pre_run_debug_plots\n    tracking_results_plot = SimpleSimConfig.result_plots\n    sim_gui = SimpleSimConfig.custom_sim_gui\n\n    quad_mpc = prepare_quadrotor_mpc(simulation_options, **params)\n\n    # Recover some necessary variables from the MPC object\n    my_quad = quad_mpc.quad\n    n_mpc_nodes = quad_mpc.n_nodes\n    t_horizon = 
quad_mpc.t_horizon\n    simulation_dt = quad_mpc.simulation_dt\n    reference_over_sampling = 5\n    control_period = t_horizon / (n_mpc_nodes * reference_over_sampling)\n\n    if args.trajectory == \"loop\":\n        reference_traj, reference_timestamps, reference_u = loop_trajectory(\n            my_quad, control_period, radius=args.trajectory_radius, z=1, lin_acc=args.acceleration, clockwise=True,\n            yawing=False, v_max=args.max_speed, map_name=None, plot=debug_plots)\n\n    elif args.trajectory == \"lemniscate\":\n        reference_traj, reference_timestamps, reference_u = lemniscate_trajectory(\n            my_quad, control_period, radius=args.trajectory_radius, z=1, lin_acc=args.acceleration, clockwise=True,\n            yawing=False, v_max=args.max_speed, map_name=None, plot=debug_plots)\n\n    else:\n        raise ValueError(\"Unknown trajectory {}. Options are `lemniscate` and `loop`\".format(args.trajectory))\n\n    if not check_trajectory(reference_traj, reference_u, reference_timestamps, debug_plots):\n        return\n\n    # Set quad initial state equal to the initial reference trajectory state\n    quad_current_state = reference_traj[0, :].tolist()\n    my_quad.set_state(quad_current_state)\n\n    real_time_artists = None\n    if sim_gui:\n        # Initialize real time plot stuff\n        world_radius = np.max(np.abs(reference_traj[:, :2])) * 1.2\n        real_time_artists = initialize_drone_plotter(n_props=n_mpc_nodes, quad_rad=my_quad.length,\n                                                     world_rad=world_radius, full_traj=reference_traj)\n\n    ref_u = reference_u[0, :]\n    quad_trajectory = np.zeros((len(reference_timestamps), len(quad_current_state)))\n    u_optimized_seq = np.zeros((len(reference_timestamps), 4))\n\n    # Sliding reference trajectory initial index\n    current_idx = 0\n\n    # Measure the MPC optimization time\n    mean_opt_time = 0.0\n\n    # Measure total simulation time\n    total_sim_time = 0.0\n\n    
print(\"\\nRunning simulation...\")\n    for current_idx in tqdm(range(reference_traj.shape[0])):\n\n        quad_current_state = my_quad.get_state(quaternion=True, stacked=True)\n\n        quad_trajectory[current_idx, :] = np.expand_dims(quad_current_state, axis=0)\n\n        # ##### Optimization runtime (outer loop) ##### #\n        # Get the chunk of trajectory required for the current optimization\n        ref_traj_chunk, ref_u_chunk = get_reference_chunk(\n            reference_traj, reference_u, current_idx, n_mpc_nodes, reference_over_sampling)\n\n        # Set the reference for the OCP\n        model_ind = quad_mpc.set_reference(x_reference=separate_variables(ref_traj_chunk), u_reference=ref_u_chunk)\n\n        # Optimize control input to reach pre-set target\n        t_opt_init = time.time()\n        w_opt, x_pred = quad_mpc.optimize(use_model=model_ind, return_x=True)\n        mean_opt_time += time.time() - t_opt_init\n\n        # Select first input (one for each motor) - MPC applies only first optimized input to the plant\n        ref_u = np.squeeze(np.array(w_opt[:4]))\n        u_optimized_seq[current_idx, :] = np.reshape(ref_u, (1, -1))\n\n        if len(quad_trajectory) > 0 and sim_gui and current_idx > 0:\n            draw_drone_simulation(real_time_artists, quad_trajectory[:current_idx, :], my_quad, targets=None,\n                                  targets_reached=None, pred_traj=x_pred, x_pred_cov=None)\n\n        simulation_time = 0.0\n\n        # ##### Simulation runtime (inner loop) ##### #\n        while simulation_time < control_period:\n            simulation_time += simulation_dt\n            total_sim_time += simulation_dt\n            quad_mpc.simulate(ref_u)\n\n    u_optimized_seq[current_idx, :] = np.reshape(ref_u, (1, -1))\n\n    quad_current_state = my_quad.get_state(quaternion=True, stacked=True)\n    quad_trajectory[-1, :] = np.expand_dims(quad_current_state, axis=0)\n    u_optimized_seq[-1, :] = np.reshape(ref_u, (1, -1))\n\n    # 
Average elapsed time per optimization\n    mean_opt_time = mean_opt_time / current_idx * 1000\n    tracking_rmse = np.mean(np.sqrt(np.sum((reference_traj[:, :3] - quad_trajectory[:, :3]) ** 2, axis=1)))\n\n    if tracking_results_plot:\n        v_max = np.max(reference_traj[:, 7:10])\n\n        with_gp = ' + GP ' if quad_mpc.gp_ensemble is not None else ' - GP '\n        title = r'$v_{max}$=%.2f m/s | RMSE: %.4f m | %s ' % (v_max, float(tracking_rmse), with_gp)\n        trajectory_tracking_results(reference_timestamps, reference_traj, quad_trajectory, reference_u, u_optimized_seq,\n                                    title)\n\n    v_max_abs = np.max(np.sqrt(np.sum(reference_traj[:, 7:10] ** 2, 1)))\n\n    print(\"\\n:::::::::::::: SIMULATION SETUP ::::::::::::::\\n\")\n    print(\"Simulation: Applied disturbances: \")\n    print(json.dumps(simulation_options))\n    if quad_mpc.gp_ensemble is not None:\n        print(\"\\nModel: Using GP regression model: \", params[\"version\"] + '/' + params[\"name\"])\n    elif quad_mpc.mlp is not None:\n        print(\"\\nModel: Using MLP regression model: \", params[\"version\"] + '/' + params[\"name\"])\n    elif quad_mpc.rdrv is not None:\n        print(\"\\nModel: Using RDRv regression model: \", params[\"version\"] + '/' + params[\"name\"])\n    else:\n        print(\"\\nModel: No regression model loaded\")\n\n    print(\"\\nReference: Executed trajectory\", '`' + args.trajectory + '`', \"with a peak axial velocity of\",\n          args.max_speed, \"m/s, and a maximum speed of %2.3f m/s\" % v_max_abs)\n\n    print(\"\\n::::::::::::: SIMULATION RESULTS :::::::::::::\\n\")\n    print(\"Mean optimization time: %.3f ms\" % mean_opt_time)\n    print(\"Tracking RMSE: %.4f m\\n\" % tracking_rmse)\n\n\nif __name__ == '__main__':\n\n    parser = argparse.ArgumentParser()\n\n    parser.add_argument(\"--model_version\", type=str, default=\"\",\n                        help=\"Version to load for the regression models. 
By default it is an 8 digit git hash.\")\n\n    parser.add_argument(\"--model_name\", type=str, default=\"\",\n                        help=\"Name of the regression model within the specified <model_version> folder.\")\n\n    parser.add_argument(\"--model_type\", type=str, default=\"gp\",\n                        help=\"Type of regression model (GP or RDRv linear)\")\n\n    parser.add_argument(\"--trajectory\", type=str, default=\"loop\", choices=[\"loop\", \"lemniscate\"],\n                        help=\"Reference trajectory to track. Options: `loop` or `lemniscate`.\")\n\n    parser.add_argument(\"--max_speed\", type=float, default=8,\n                        help=\"Maximum axial speed executed during the flight in m/s. For the `loop` trajectory, \"\n                             \"velocities are feasible up to 14 m/s, and for the `lemniscate` up to 8 m/s\")\n\n    parser.add_argument(\"--acceleration\", type=float, default=1,\n                        help=\"Acceleration of the reference trajectory. Higher accelerations will shorten the execution \"\n                             \"time of the tracking\")\n\n    parser.add_argument(\"--trajectory_radius\", type=float, default=5, help=\"Radius of the reference trajectories\")\n    input_arguments = parser.parse_args()\n\n    main(input_arguments)\n"
  },
  {
    "path": "ros_dd_mpc/src/model_fitting/__init__.py",
    "content": ""
  },
  {
    "path": "ros_dd_mpc/src/model_fitting/gp.py",
    "content": "\"\"\" Gaussian Process custom implementation for the data-augmented MPC.\n\nThis program is free software: you can redistribute it and/or modify it under\nthe terms of the GNU General Public License as published by the Free Software\nFoundation, either version 3 of the License, or (at your option) any later\nversion.\nThis program is distributed in the hope that it will be useful, but WITHOUT\nANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\nFOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.\nYou should have received a copy of the GNU General Public License along with\nthis program. If not, see <http://www.gnu.org/licenses/>.\n\"\"\"\n\n\nimport numpy as np\nimport casadi as cs\nimport joblib\n\nfrom tqdm import tqdm\nfrom operator import itemgetter\nfrom numpy.linalg import inv, cholesky, lstsq\nfrom numpy.random import mtrand\nfrom scipy.optimize import minimize\nfrom scipy.spatial.distance import pdist, cdist, squareform\n\nfrom src.utils.utils import safe_mknode_recursive, make_bz_matrix\n\n\nclass CustomKernelFunctions:\n\n    def __init__(self, kernel_func, params=None):\n\n        self.params = params\n        self.kernel_type = kernel_func\n\n        if self.kernel_type == 'squared_exponential':\n            if params is None:\n                self.params = {'l': [1.0], 'sigma_f': 1.0}\n            self.kernel = self.squared_exponential_kernel\n        else:\n            raise NotImplementedError(\"only squared_exponential is supported\")\n\n        self.theta = self.get_trainable_parameters()\n\n    def __call__(self, x_1, x_2):\n        return self.kernel(x_1, x_2)\n\n    def __str__(self):\n        if self.kernel_type == 'squared_exponential':\n            len_scales = np.reshape(self.params['l'], -1)\n            len_scale_str = '['\n            for i in range(len(len_scales)):\n                len_scale_str += '%.3f, ' % len_scales[i] if i < len(len_scales) - 1 else '%.3f' % 
len_scales[i]\n            len_scale_str += ']'\n            summary = '%.3f' % self.params['sigma_f']\n            summary += '**2*RBF(length_scale=' + len_scale_str + ')'\n            return summary\n\n        else:\n            raise NotImplementedError(\"only squared_exponential is supported\")\n\n    def get_trainable_parameters(self):\n        trainable_params = []\n        if self.kernel_type == 'squared_exponential':\n            trainable_params += \\\n                np.reshape(np.squeeze(self.params['l']), -1).tolist() if hasattr(self.params['l'], \"__len__\") \\\n                else [self.params['l']]\n            trainable_params += [self.params['sigma_f']]\n        return trainable_params\n\n    @staticmethod\n    def _check_length_scale(x, length_scale):\n        length_scale = np.squeeze(length_scale).astype(float)\n        if np.ndim(length_scale) > 1:\n            raise ValueError(\"length_scale cannot be of dimension greater than 1\")\n        if np.ndim(length_scale) == 1 and x.shape[1] != length_scale.shape[0]:\n            raise ValueError(\"Anisotropic kernel must have the same number of dimensions as data (%d!=%d)\"\n                             % (length_scale.shape[0], x.shape[1]))\n        return length_scale\n\n    def squared_exponential_kernel(self, x_1, x_2=None):\n        \"\"\"\n        Anisotropic (diagonal length-scale) matrix squared exponential kernel. 
Computes a covariance matrix from points\n        in x_1 and x_2.\n\n        Args:\n            x_1: Array of m points (m x d).\n            x_2: Array of n points (n x d).\n\n        Returns:\n            Covariance matrix (m x n).\n        \"\"\"\n\n        if isinstance(x_2, cs.MX):\n            return self._squared_exponential_kernel_cs(x_1, x_2)\n\n        # Length scale parameter\n        len_scale = self.params['l'] if 'l' in self.params.keys() else 1.0\n\n        # Vertical variation parameter\n        sigma_f = self.params['sigma_f'] if 'sigma_f' in self.params.keys() else 1.0\n\n        x_1 = np.atleast_2d(x_1)\n        length_scale = self._check_length_scale(x_1, len_scale)\n        if x_2 is None:\n            dists = pdist(x_1 / length_scale, metric='sqeuclidean')\n            k = sigma_f * np.exp(-.5 * dists)\n            # convert from upper-triangular matrix to square matrix\n            k = squareform(k)\n            # The diagonal is k(x, x) = sigma_f * exp(0) = sigma_f\n            np.fill_diagonal(k, sigma_f)\n        else:\n            dists = cdist(x_1 / length_scale, x_2 / length_scale, metric='sqeuclidean')\n            k = sigma_f * np.exp(-.5 * dists)\n\n        return k\n\n    def _squared_exponential_kernel_cs(self, x_1, x_2):\n        \"\"\"\n        Symbolic implementation of the anisotropic squared exponential kernel\n        :param x_1: Array of m points (m x d).\n        :param x_2: Array of n points (n x d).\n        :return: Covariance matrix (m x n).\n        \"\"\"\n\n        # Length scale parameter\n        len_scale = self.params['l'] if 'l' in self.params.keys() else 1.0\n        # Vertical variation parameter\n        sigma_f = self.params['sigma_f'] if 'sigma_f' in self.params.keys() else 1.0\n\n        if x_1.shape != x_2.shape and x_2.shape[0] == 1:\n            tiling_ones = cs.MX.ones(x_1.shape[0], 1)\n            d = x_1 - cs.mtimes(tiling_ones, x_2)\n            dist = cs.sum2(d ** 2 / cs.mtimes(tiling_ones, cs.MX(len_scale ** 2).T))\n        else:\n            d = x_1 - x_2\n            
dist = cs.sum1(d ** 2 / cs.MX(len_scale ** 2))\n\n        return sigma_f * cs.SX.exp(-.5 * dist)\n\n    def diff(self, z, z_train):\n        \"\"\"\n        Computes the symbolic differentiation of the kernel function, evaluated at point z and using the training\n        dataset z_train. This function implements equation (80) from the overleaf document, without the c^{v_x} vector,\n        and for all the partial derivatives possible (m), instead of just one.\n\n        :param z: evaluation point. Symbolic vector of length m\n        :param z_train: training dataset. Symbolic matrix of shape n x m\n\n        :return: an m x n matrix, which is the derivative of the exponential kernel function evaluated at point z\n        against the training dataset.\n        \"\"\"\n\n        if self.kernel_type != 'squared_exponential':\n            raise NotImplementedError\n\n        len_scale = self.params['l'] if len(self.params['l']) > 0 else self.params['l'] * cs.MX.ones(z_train.shape[1])\n        len_scale = np.atleast_2d(len_scale ** 2)\n\n        # Broadcast z vector to have the shape of z_train (tile z to the number of training points n)\n        z_tile = cs.mtimes(cs.MX.ones(z_train.shape[0], 1), z.T)\n\n        # Compute k_zZ. Broadcast it to shape of z_tile and z_train, i.e. 
by the number of variables in z.\n        k_zZ = cs.mtimes(cs.MX.ones(z_train.shape[1], 1), self.__call__(z_train, z.T).T)\n\n        return - k_zZ * (z_tile - z_train).T / cs.mtimes(cs.MX.ones(z_train.shape[0], 1), len_scale).T\n\n\nclass CustomGPRegression:\n\n    def __init__(self, x_features, u_features, reg_dim, mean=None, y_mean=None, kernel=None, sigma_n=1e-8,\n                 n_restarts=1):\n        \"\"\"\n        :param x_features: list of indices for the quadrotor state-derived features\n        :param u_features: list of indices for the input state-derived features\n        :param reg_dim: state dimension that this regressor is meant to be used for.\n        :param mean: prior mean value\n        :param y_mean: average y value for data normalization\n        :param kernel: Kernel Function object\n        :param sigma_n: noise sigma value\n        :param n_restarts: number of optimization re-initializations\n        \"\"\"\n\n        if kernel is None:\n            kernel = CustomKernelFunctions('squared_exponential')\n\n        # Avoid non-invertible error\n        assert sigma_n != 0.0\n\n        # For inference time\n        self.x_features = x_features\n        self.u_features = u_features\n        self.reg_dim = reg_dim\n\n        self.kernel_ = kernel\n        self.kernel_type = kernel.kernel_type\n\n        # Noise variance prior\n        self.sigma_n = sigma_n\n\n        # GP center of local feature space\n        self.mean = mean\n        self.y_mean = y_mean\n\n        # Pre-computed training data kernel\n        self._K = np.zeros((0, 0))\n        self._K_inv = np.zeros((0, 0))\n        self._K_inv_y = np.zeros((0, ))\n\n        # Training dataset memory\n        self._x_train = np.zeros((0, 0))\n        self._y_train = np.zeros((0, ))\n\n        # CasADi symbolic equivalents\n        self._K_cs = None\n        self._K_inv_cs = None\n        self._K_inv_y_cs = None\n        self._x_train_cs = None\n        self._y_train_cs = None\n\n        
self.sym_pred = None\n        self.sym_jacobian_dz = None\n\n        self.n_restarts = n_restarts\n\n    @property\n    def kernel(self):\n        return self.kernel_\n\n    @kernel.setter\n    def kernel(self, ker):\n        self.kernel_ = ker\n\n    @property\n    def K(self):\n        return self._K\n\n    @K.setter\n    def K(self, k):\n        self._K = k\n        self._K_cs = cs.DM(k)\n\n    @property\n    def K_inv(self):\n        return self._K_inv\n\n    @K_inv.setter\n    def K_inv(self, k):\n        self._K_inv = k\n        self._K_inv_cs = cs.DM(k)\n\n    @property\n    def K_inv_y(self):\n        return self._K_inv_y\n\n    @K_inv_y.setter\n    def K_inv_y(self, k):\n        self._K_inv_y = k\n        self._K_inv_y_cs = cs.DM(k)\n\n    @property\n    def x_train(self):\n        return self._x_train\n\n    @x_train.setter\n    def x_train(self, k):\n        self._x_train = k\n        self._x_train_cs = cs.DM(k)\n\n    @property\n    def y_train(self):\n        return self._y_train\n\n    @y_train.setter\n    def y_train(self, k):\n        self._y_train = k\n        self._y_train_cs = cs.DM(k)\n\n    def log_marginal_likelihood(self, theta):\n\n        l_params = np.exp(theta[:-1])\n        sigma_f = np.exp(theta[-1])\n        sigma_n = self.sigma_n\n\n        kernel = CustomKernelFunctions(self.kernel_type, params={'l': l_params, 'sigma_f': sigma_f})\n        k_train = kernel(self.x_train, self.x_train) + sigma_n ** 2 * np.eye(len(self.x_train))\n        l_mat = cholesky(k_train)\n        nll = np.sum(np.log(np.diagonal(l_mat))) + \\\n            0.5 * self.y_train.T.dot(lstsq(l_mat.T, lstsq(l_mat, self.y_train, rcond=None)[0], rcond=None)[0]) + \\\n            0.5 * len(self.x_train) * np.log(2 * np.pi)\n        return nll\n\n    def _nll(self, x_train, y_train):\n        \"\"\"\n        Returns a numerically stable function implementation of the negative log likelihood using the cholesky\n        decomposition of the kernel matrix. 
http://www.gaussianprocess.org/gpml/chapters/RW2.pdf, Section 2.2,\n        Algorithm 2.1.\n        :param x_train: Array of m points (m x d).\n        :param y_train: Array of m points (m x 1)\n        :return: negative log likelihood (scalar) computing function\n        \"\"\"\n\n        def nll_func(theta):\n\n            l_params = np.exp(theta[:-2])\n            sigma_f = np.exp(theta[-2])\n            sigma_n = np.exp(theta[-1])\n\n            kernel = CustomKernelFunctions(self.kernel_type, params={'l': l_params, 'sigma_f': sigma_f})\n            k_train = kernel(x_train, x_train) + sigma_n ** 2 * np.eye(len(x_train))\n            l_mat = cholesky(k_train)\n            nll = np.sum(np.log(np.diagonal(l_mat))) + \\\n                0.5 * y_train.T.dot(lstsq(l_mat.T, lstsq(l_mat, y_train, rcond=None)[0], rcond=None)[0]) + \\\n                0.5 * len(x_train) * np.log(2 * np.pi)\n            return nll\n\n        return nll_func\n\n    def _constrained_minimization(self, x_train, y_train, x_0, bounds):\n        try:\n            res = minimize(self._nll(x_train, y_train), x0=x_0, bounds=bounds, method='L-BFGS-B')\n            return np.exp(res.x), res.fun\n        except np.linalg.LinAlgError:\n            # x_0 is in log-space; return it in parameter space like the success branch\n            return np.exp(x_0), np.inf\n\n    def fit(self, x_train, y_train):\n        \"\"\"\n        Fit a GP regressor to the training dataset by minimizing the negative log likelihood of the training data\n\n        :param x_train: Array of m points (m x d).\n        :param y_train: Array of m points (m x 1)\n        \"\"\"\n\n        initial_guess = self.kernel.get_trainable_parameters()\n        initial_guess += [self.sigma_n]\n        initial_guess = np.array(initial_guess)\n\n        bounds = [(1e-5, 1e1) for _ in range(len(initial_guess) - 1)]\n        bounds = bounds + [(1e-8, 1e0)]\n        log_bounds = np.log(tuple(bounds))\n\n        # Center targets without mutating the caller's array\n        y_train = y_train - self.y_mean\n\n        # The optimizer works in log-space, so the initial guess must be log-transformed too\n        optima = [self._constrained_minimization(x_train, y_train, np.log(initial_guess), log_bounds)]\n\n        
if self.n_restarts > 1:\n            random_state = mtrand._rand\n            for iteration in range(self.n_restarts - 1):\n                theta_initial = random_state.uniform(log_bounds[:, 0], log_bounds[:, 1])\n                optima.append(self._constrained_minimization(x_train, y_train, theta_initial, log_bounds))\n\n        lml_values = list(map(itemgetter(1), optima))\n        theta_opt = optima[int(np.argmin(lml_values))][0]\n\n        # Update kernel with new parameters\n        l_new = theta_opt[:-2]\n        sigma_f_new = theta_opt[-2]\n        self.sigma_n = theta_opt[-1]\n        self.kernel = CustomKernelFunctions(self.kernel_type, params={'l': l_new, 'sigma_f': sigma_f_new})\n\n        # Pre-compute kernel matrices\n        self.K = self.kernel(x_train, x_train) + self.sigma_n ** 2 * np.eye(len(x_train))\n        self.K_inv = inv(self.K)\n        self.K_inv_y = self.K_inv.dot(y_train)\n\n        # Update training dataset points\n        self.x_train = x_train\n        self.y_train = y_train\n\n        self.compute_gp_jac()\n\n    def compute_gp_jac(self):\n\n        self.sym_jacobian_dz = self._linearized_state_estimate()\n\n    def eval_gp_jac(self, z):\n\n        if self.sym_jacobian_dz is None:\n            self.compute_gp_jac()\n\n        return self.sym_jacobian_dz(z)\n\n    def _linearized_state_estimate(self):\n        \"\"\"\n        Computes the symbolic linearization of the GP prediction expected state with respect to the inputs of the GP\n        itself.\n\n        :return: a CasADi function that computes the linearized GP prediction estimate wrt the input features of the\n        GP regressor itself. 
The output of the function is a vector of shape m, where m is the number of regression\n        features.\n        \"\"\"\n\n        if self.kernel_type != 'squared_exponential':\n            raise NotImplementedError\n\n        # Symbolic variable for input state\n        z = cs.MX.sym('z', self.x_train.shape[1])\n\n        # Compute the kernel derivative:\n        dgpdz = cs.mtimes(self.kernel.diff(z, self._x_train_cs), self._K_inv_y_cs)\n\n        return cs.Function('f', [z], [dgpdz], ['z'], ['dgpdz'])\n\n    def predict(self, x_test, return_std=False, return_cov=False):\n        \"\"\"\n        Computes the sufficient statistics of the GP posterior predictive distribution\n        from m training data X_train and Y_train and n new inputs X_s.\n\n        Args:\n            x_test: test input locations (n x d).\n            return_std: boolean - return a standard deviation vector of size n\n            return_cov: boolean - return a covariance vector of size n (square of the standard deviation)\n\n        Returns:\n            Posterior mean vector (n) and covariance diagonal or standard deviation vectors (n) if also requested.\n        \"\"\"\n\n        # Ensure at least n=1\n        x_test = np.atleast_2d(x_test) if isinstance(x_test, np.ndarray) else x_test\n\n        if isinstance(x_test, cs.MX):\n            return self._predict_sym(x_test=x_test, return_std=return_std, return_cov=return_cov)\n\n        if isinstance(x_test, cs.DM):\n            x_test = np.array(x_test).T\n\n        k_s = self.kernel(x_test, self.x_train)\n        k_ss = self.kernel(x_test, x_test) + 1e-8 * np.eye(len(x_test))\n\n        # Posterior mean value\n        mu_s = k_s.dot(self.K_inv_y) + self.y_mean\n\n        # Posterior covariance\n        cov_s = k_ss - k_s.dot(self.K_inv).dot(k_s.T)\n        std_s = np.sqrt(np.diag(cov_s))\n\n        if not return_std and not return_cov:\n            return mu_s\n\n        # Return covariance\n        if return_cov:\n            return mu_s, std_s 
** 2\n\n        # Return standard deviation\n        return mu_s, std_s\n\n    def _predict_sym(self, x_test, return_std=False, return_cov=False):\n        \"\"\"\n        Computes the GP posterior mean and covariance at a given test sample using CasADi symbolics.\n        :param x_test: vector of size mx1, where m is the number of features used for the GP prediction\n\n        :return: the posterior mean (scalar) and covariance (scalar).\n        \"\"\"\n\n        k_s = self.kernel(self._x_train_cs, x_test.T)\n\n        # Posterior mean value\n        mu_s = cs.mtimes(k_s.T, self._K_inv_y_cs) + self.y_mean\n\n        if not return_std and not return_cov:\n            return {'mu': mu_s}\n\n        k_ss = self.kernel(x_test, x_test) + 1e-8 * cs.MX.eye(x_test.shape[1])\n\n        # Posterior covariance\n        cov_s = k_ss - cs.mtimes(cs.mtimes(k_s.T, self._K_inv_cs), k_s)\n        cov_s = cs.diag(cov_s)\n\n        if return_std:\n            return {'mu': mu_s, 'std': np.sqrt(cov_s)}\n\n        return {'mu': mu_s, 'cov': cov_s}\n\n    def sample_y(self, x_test, n_samples=3):\n        \"\"\"\n        Sample a number of functions from the process at the given test points\n\n        :param x_test: test input locations (n x d).\n        :param n_samples: integer, number of samples to draw\n        :return: the drawn samples from the Gaussian process. An array of shape n x n_samples\n        \"\"\"\n\n        mu, cov = self.predict(x_test, return_cov=True)\n\n        # Draw n_samples functions from the posterior at the test points\n        samples = np.random.multivariate_normal(mu.ravel(), np.diag(cov), n_samples)\n\n        return samples.T\n\n    def save(self, path):\n        \"\"\"\n        Saves the current GP regressor to the specified path as a pickle file. 
Must be re-loaded with the function load\n        :param path: absolute path to save the regressor to\n        \"\"\"\n\n        saved_vars = {\n            \"kernel_params\": self.kernel.params,\n            \"kernel_type\": self.kernel.kernel_type,\n            \"x_train\": self.x_train,\n            \"y_train\": self.y_train,\n            \"k_inv_y\": self.K_inv_y,\n            \"k_inv\": self.K_inv,\n            \"sigma_n\": self.sigma_n,\n            \"reg_dim\": self.reg_dim,\n            \"x_features\": self.x_features,\n            \"u_features\": self.u_features,\n            \"mean\": self.mean,\n            \"y_mean\": self.y_mean\n        }\n\n        split_path = path.split('/')\n        directory = '/'.join(split_path[:-1])\n        file = split_path[-1]\n        safe_mknode_recursive(directory, file, overwrite=True)\n\n        with open(path, 'wb') as f:\n            joblib.dump(saved_vars, f)\n\n    def load(self, data_dict):\n        \"\"\"\n        Load a pre-trained GP regressor\n        :param data_dict: a dictionary with all the pre-trained matrices of the GP regressor\n        \"\"\"\n\n        self.K_inv = data_dict['k_inv']\n        self.K_inv_y = data_dict['k_inv_y']\n        self.x_train = data_dict['x_train']\n        self.y_train = data_dict['y_train']\n        self.kernel_type = data_dict['kernel_type']\n        self.kernel = CustomKernelFunctions(self.kernel_type, data_dict['kernel_params'])\n        self.sigma_n = data_dict['sigma_n']\n        self.mean = data_dict['mean'] if 'mean' in data_dict.keys() else np.array([0, 0, 0])\n        self.y_mean = data_dict['y_mean'] if 'y_mean' in data_dict.keys() else np.array(0)\n        self.compute_gp_jac()\n\n\nclass GPEnsemble:\n    def __init__(self):\n        \"\"\"\n        Sets up a GP regression ensemble. 
This essentially divides the prediction domain into different GP's, so that\n        fewer training samples need to be used per GP.\n        \"\"\"\n\n        self.out_dim = 0\n        self.n_models_dict = {}\n\n        # Map index to dimension to make dimensions iterable\n        self.dim_idx = np.zeros(0, dtype=int)\n\n        # Dictionary of lists. Each element of the dictionary is indexed by the index of the GP output in the state\n        # space, and contains a list with all the GP's (one per cluster) used in that dimension.\n        self.gp = {}\n\n        # Store the centroids of all GP's\n        self.gp_centroids = {}\n\n        # Store the B_z matrices\n        self.B_z_dict = {}\n\n        # Whether the same clustering is used for all dimensions or not\n        self.homogeneous = True\n\n        # Whether the GP model has no ensembles in it (i.e. no GP has more than 1 cluster)\n        self.no_ensemble = True\n\n    @property\n    def n_models(self):\n        if self.homogeneous or self.no_ensemble:\n            return self.n_models_dict[next(iter(self.n_models_dict))]\n        return self.n_models_dict\n\n    @property\n    def B_z(self):\n        return self.B_z_dict[next(iter(self.B_z_dict))] if self.homogeneous else self.B_z_dict\n\n    def add_model(self, gp):\n        \"\"\"\n        :param gp: A list of n CustomGPRegression objects, where n is the number of GP's used to divide the feature\n        space domain of that particular dimension.\n        :type gp: list\n        \"\"\"\n\n        # Check if dimension is already \"occupied\" by another GP\n        gp_dim = gp[0].reg_dim\n        if gp_dim in self.gp.keys():\n            raise ValueError(\"This dimension is already taken by another GP\")\n\n        self.out_dim += 1\n        self.dim_idx = np.append(self.dim_idx, gp_dim)\n\n        self.gp[gp_dim] = np.array(gp)\n\n        # Store centroids and sort along first dimension for easier comparison\n        self.gp_centroids[gp_dim] = 
np.array([gp_cluster.mean for gp_cluster in gp])\n        sorted_cluster_idx = np.argsort(self.gp_centroids[gp_dim][:, 0])\n        self.gp_centroids[gp_dim] = self.gp_centroids[gp_dim][sorted_cluster_idx]\n        self.gp[gp_dim] = self.gp[gp_dim][sorted_cluster_idx]\n\n        # Calculate if Ensemble is still homogeneous\n        self.homogeneous = self.homogeneous_feature_space()\n\n        # Check if current gp is an actual ensemble\n        self.n_models_dict[gp_dim] = len(gp)\n        if len(gp) > 1:\n            self.no_ensemble = False\n\n        # Pre-compute B_z matrix\n        self.B_z_dict[gp_dim] = make_bz_matrix(x_dims=13, u_dims=4, x_feats=gp[0].x_features, u_feats=gp[0].u_features)\n\n    def get_z(self, x, u, dim):\n        \"\"\"\n        Computes the z features from the x and u vectors, and the target output dimension.\n        :param x: state vector. Shape 13x1. Can be np.array or cs.MX.\n        :param u: control input vector. Shape 4x1. Can be np.array or cs.MX.\n        :param dim: output dimension target.\n        :return: A vector of shape mx1 of the same format as inputs. m is determined by the B_z matrix for dim.\n        \"\"\"\n\n        # Get input feature indices\n        x_feats = self.gp[dim][0].x_features\n        u_feats = self.gp[dim][0].u_features\n\n        # Stack into a single matrix\n        if isinstance(x, np.ndarray):\n            z = np.concatenate((x[x_feats], u[u_feats]), axis=0)\n        elif isinstance(x, cs.MX):\n            z = cs.mtimes(self.B_z_dict[dim], cs.vertcat(x, u))\n        else:\n            raise TypeError\n\n        return z\n\n    def predict(self, x_test, u_test, return_std=False, return_cov=False, return_gp_id=False, return_z=False,\n                progress_bar=False, gp_idx=None):\n        \"\"\"\n        Runs GP inference. First, select the GP optimally for the test samples. Then, run inference on that GP.\n        :param x_test: array of shape d x n. 
n is the number of test samples and d their dimension.\n        :param u_test: array of shape d' x n. n is the number of test samples and d' their dimension.\n        :param return_std: True if also return the standard deviation of the GP inference.\n        :param return_cov: True if also return the covariance of the GP inference.\n        :param return_gp_id: True if also return the id of the GP used for inference.\n        :param return_z: True if also return the z features computed for inference.\n        :param progress_bar: If True, a progress bar will be shown when evaluating the test data.\n        :param gp_idx: Dictionary of indices with the same length as the GP output dimension indicating which GP to use\n        for each one. If None, select best based on x_test.\n        :type gp_idx: dict\n        :return: m x n arrays, where m is the output dimension and n is the number of samples.\n        \"\"\"\n\n        if return_std:\n            assert not return_cov, \"Can only return the std or the cov\"\n        if return_cov:\n            assert not return_std, \"Can only return the std or the cov\"\n\n        # Dictionary for function return\n        outputs = {}\n\n        # Build regression features and evaluation cluster indices for each GP output dimension\n        z = {}\n        gp_idx = {} if gp_idx is None else gp_idx\n\n        if not self.homogeneous:\n            for dim in self.gp.keys():\n\n                z[dim] = self.get_z(x_test, u_test, dim)\n\n                if dim not in gp_idx.keys():\n                    # Calculate optimal GP clusters to use for each test point\n                    gp_idx[dim] = self.select_gp(z=z[dim], dim=dim)\n                    gp_idx[dim] = np.atleast_1d(gp_idx[dim])\n\n        else:\n            z_ = self.get_z(x_test, u_test, self.dim_idx[0])\n            z = {k: v for k, v in zip(self.dim_idx, [z_] * self.out_dim)}\n\n            if not bool(gp_idx):\n                gp_idx_ = self.select_gp(z=z_, 
dim=self.dim_idx[0])\n                gp_idx = {k: v for k, v in zip(self.dim_idx, [gp_idx_] * self.out_dim)}\n\n        # Add stuff to output dictionary\n        if return_z:\n            outputs[\"z\"] = z\n        if return_gp_id:\n            outputs[\"gp_id\"] = gp_idx\n\n        pred = []\n        cov_or_std = []\n        noise_prior = []\n\n        # Test data loop\n        range_data = tqdm(range(x_test.shape[1])) if progress_bar else range(x_test.shape[1])\n        for j in range_data:\n\n            pred_j = []\n            cov_or_std_j = []\n            noise_prior_j = []\n\n            # Output dim loop\n            for dim in self.gp.keys():\n                out = self.gp[dim][gp_idx[dim][j]].predict(z[dim][:, j], return_std, return_cov)\n                if not return_std and not return_cov:\n                    if isinstance(out, dict):\n                        pred_j += [out['mu']]\n                    else:\n                        pred_j += [out]\n                else:\n                    if isinstance(out, dict):\n                        pred_j += [out['mu']]\n                        cov_or_std_j += [out['cov'] if 'cov' in out.keys() else out['std']]\n                    else:\n                        pred_j += [out[0]]\n                        cov_or_std_j += [out[1]]\n                    noise_prior_j += [np.array([self.gp[dim][gp_idx[dim][j]].sigma_n])]\n\n            pred += [pred_j]\n            cov_or_std += [cov_or_std_j]\n            noise_prior += [noise_prior_j]\n\n        # Convert to CasADi symbolic or numpy matrix depending on the input type\n        pred = cs.horzcat(*[cs.vertcat(*pred[i]) for i in range(x_test.shape[1])]) \\\n            if isinstance(x_test, cs.MX) else np.squeeze(np.array(pred)).T\n\n        outputs[\"pred\"] = pred\n\n        if not return_std and not return_cov:\n            return outputs\n\n        # Convert to CasADi symbolic or numpy matrix depending on the input type\n        cov_or_std = 
cs.horzcat(*[cs.vertcat(*cov_or_std[i]) for i in range(x_test.shape[1])]) \\\n            if isinstance(x_test, cs.MX) else np.squeeze(np.array(cov_or_std)).T\n        noise_prior = cs.horzcat(*[cs.vertcat(*noise_prior[i]) for i in range(x_test.shape[1])]) \\\n            if isinstance(x_test, cs.MX) else np.squeeze(np.array(noise_prior)).T\n\n        outputs[\"cov_or_std\"] = cov_or_std\n        outputs[\"noise_cov\"] = noise_prior\n\n        return outputs\n\n    def select_gp(self, dim, x=None, u=None, z=None):\n        \"\"\"\n        Selects the best GP's for computing inference at the given test points x for a given regression output\n        dimension. This calculation is done by computing the distance of all n test points to all available GP's\n        centroids and selecting the one minimizing the Euclidean distance.\n\n        :param z: np array of shape d x n corresponding to the processed feature vector. If unknown one may call this\n        method with x and u instead.\n        :param x: np array of shape 13 x n corresponding to the query quadrotor states. Only necessary if z=None.\n        :param u: np.array of shape 4 x n corresponding to the query quadrotor control vectors. Only necessary if\n        z=None.\n        :param dim: index of GP output dimension. 
If None, evaluate on all dimensions.\n        :return: a numpy vector of length n, indicating which GP to use for every test sample x.\n        \"\"\"\n\n        if dim is None:\n            dim = self.dim_idx\n\n        if isinstance(dim, np.ndarray):\n            # If the ensemble is homogeneous only one evaluation is necessary\n            if self.homogeneous or self.no_ensemble:\n                return self.select_gp(dim[0], x, u, z)[0]\n\n            return np.array([self.select_gp(i, x, u, z) for i in dim])\n\n        if z is None:\n            z = self.get_z(x, u, dim)\n        z = np.atleast_2d(z)\n\n        centroids = self.gp_centroids[dim]\n\n        # Select subset of features for current dimension\n        return np.argmin(np.sqrt(np.sum((z[np.newaxis, :, :] - centroids[:, :, np.newaxis]) ** 2, 1)), 0)\n\n    def homogeneous_feature_space(self):\n        if self.out_dim == 1:\n            return True\n\n        equal_clusters = True\n        last_centroids = None\n        for i, key in enumerate(self.gp_centroids.keys()):\n            centroids = self.gp_centroids[key]\n            if i == 0:\n                last_centroids = centroids\n                continue\n            if np.any(last_centroids != centroids):\n                equal_clusters = False\n                break\n            last_centroids = centroids\n\n        return equal_clusters\n"
  },
  {
    "path": "ros_dd_mpc/src/model_fitting/gp_common.py",
    "content": "\"\"\" Contains a set of utility functions and classes for the GP training and inference.\n\nThis program is free software: you can redistribute it and/or modify it under\nthe terms of the GNU General Public License as published by the Free Software\nFoundation, either version 3 of the License, or (at your option) any later\nversion.\nThis program is distributed in the hope that it will be useful, but WITHOUT\nANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\nFOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.\nYou should have received a copy of the GNU General Public License along with\nthis program. If not, see <http://www.gnu.org/licenses/>.\n\"\"\"\n\n\nimport copy\nimport os\n\nimport joblib\nimport numpy as np\nimport pandas as pd\nfrom sklearn.mixture import GaussianMixture\n\nfrom src.model_fitting.gp import GPEnsemble, CustomGPRegression as npGPRegression\nfrom src.utils.utils import undo_jsonify, prune_dataset, safe_mknode_recursive, get_data_dir_and_file, \\\n    separate_variables, v_dot_q, quaternion_inverse\nfrom src.utils.visualization import visualize_data_distribution\n\n\nclass GPDataset:\n    def __init__(self, raw_ds=None,\n                 x_features=None, u_features=None, y_dim=None,\n                 cap=None, n_bins=None, thresh=None,\n                 visualize_data=False):\n\n        self.data_labels = [r'$p_x$', r'$p_y$', r'$p_z$', r'$q_w$', r'$q_x$', r'$q_y$', r'$q_z$',\n                            r'$v_x$', r'$v_y$', r'$v_z$', r'$w_x$', r'$w_y$', r'$w_z$']\n\n        # Raw dataset data\n        self.x_raw = None\n        self.x_out_raw = None\n        self.u_raw = None\n        self.y_raw = None\n        self.x_pred_raw = None\n        self.dt_raw = None\n\n        self.x_features = x_features\n        self.u_features = u_features\n        self.y_dim = y_dim\n\n        # Data pruning parameters\n        self.cap = cap\n        self.n_bins = n_bins\n        
self.thresh = thresh\n        self.plot = visualize_data\n\n        # GMM clustering\n        self.pruned_idx = []\n        self.cluster_agency = None\n        self.centroids = None\n\n        # Number of data clusters\n        self.n_clusters = 1\n\n        if raw_ds is not None:\n            self.load_data(raw_ds)\n            self.prune()\n\n    def load_data(self, ds):\n        if isinstance(ds, np.lib.npyio.NpzFile):\n            x_raw = ds['state_in']\n            x_out = ds['state_out']\n            x_pred = ds['state_pred']\n            u_raw = ds['input_in']\n            dt = ds[\"dt\"]\n        else:\n            x_raw = undo_jsonify(ds['state_in'].to_numpy())\n            x_out = undo_jsonify(ds['state_out'].to_numpy())\n            x_pred = undo_jsonify(ds['state_pred'].to_numpy())\n            u_raw = undo_jsonify(ds['input_in'].to_numpy())\n            dt = ds[\"dt\"].to_numpy()\n\n        invalid = np.where(dt == 0)\n\n        # Remove invalid entries (dt = 0)\n        x_raw = np.delete(x_raw, invalid, axis=0)\n        x_out = np.delete(x_out, invalid, axis=0)\n        x_pred = np.delete(x_pred, invalid, axis=0)\n        u_raw = np.delete(u_raw, invalid, axis=0)\n        dt = np.delete(dt, invalid, axis=0)\n\n        # Rotate velocities to body frame and recompute errors\n        x_raw = world_to_body_velocity_mapping(x_raw)\n        x_pred = world_to_body_velocity_mapping(x_pred)\n        x_out = world_to_body_velocity_mapping(x_out)\n        y_err = x_out - x_pred\n\n        # Normalize error by window time (i.e. 
predict error dynamics instead of error itself)\n        y_err /= np.expand_dims(dt, 1)\n\n        # Select features\n        self.x_raw = x_raw\n        self.x_out_raw = x_out\n        self.u_raw = u_raw\n        self.y_raw = y_err\n        self.x_pred_raw = x_pred\n        self.dt_raw = dt\n\n    def prune(self):\n        # Prune noisy data\n        if self.cap is not None and self.n_bins is not None and self.thresh is not None:\n            x_interest = np.array([7, 8, 9])\n            y_interest = np.array([7, 8, 9])\n\n            labels = [self.data_labels[dim] for dim in np.atleast_1d(y_interest)]\n\n            prune_x_data = self.get_x(pruned=False, raw=True)[:, x_interest]\n            prune_y_data = self.get_y(pruned=False, raw=True)[:, y_interest]\n            self.pruned_idx.append(prune_dataset(prune_x_data, prune_y_data, self.cap, self.n_bins, self.thresh,\n                                                 plot=self.plot, labels=labels))\n\n    def get_x(self, cluster=None, pruned=True, raw=False):\n\n        if cluster is not None:\n            assert pruned\n\n        if raw:\n            return self.x_raw[tuple(self.pruned_idx)] if pruned else self.x_raw\n\n        if self.x_features is not None and self.u_features is not None:\n            x_f = self.x_features\n            u_f = self.u_features\n            data = np.concatenate((self.x_raw[:, x_f], self.u_raw[:, u_f]), axis=1) if u_f else self.x_raw[:, x_f]\n        else:\n            data = np.concatenate((self.x_raw, self.u_raw), axis=1)\n        data = data[:, np.newaxis] if len(data.shape) == 1 else data\n\n        if pruned or cluster is not None:\n            data = data[tuple(self.pruned_idx)]\n            data = data[self.cluster_agency[cluster]] if cluster is not None else data\n\n        return data\n\n    @property\n    def x(self):\n        return self.get_x()\n\n    def get_x_out(self, cluster=None, pruned=True):\n\n        if cluster is not None:\n            assert pruned\n\n      
  if pruned or cluster is not None:\n            data = self.x_out_raw[tuple(self.pruned_idx)]\n            data = data[self.cluster_agency[cluster]] if cluster is not None else data\n\n            return data\n\n        return self.x_out_raw[tuple(self.pruned_idx)] if pruned else self.x_out_raw\n\n    @property\n    def x_out(self):\n        return self.get_x_out()\n\n    def get_u(self, cluster=None, pruned=True, raw=False):\n\n        if cluster is not None:\n            assert pruned\n\n        if raw:\n            return self.u_raw[tuple(self.pruned_idx)] if pruned else self.u_raw\n\n        data = self.u_raw[:, self.u_features] if self.u_features is not None else self.u_raw\n        data = data[:, np.newaxis] if len(data.shape) == 1 else data\n\n        if pruned or cluster is not None:\n            data = data[tuple(self.pruned_idx)]\n            data = data[self.cluster_agency[cluster]] if cluster is not None else data\n\n        return data\n\n    @property\n    def u(self):\n        return self.get_u()\n\n    def get_y(self, cluster=None, pruned=True, raw=False):\n\n        if cluster is not None:\n            assert pruned\n\n        if raw:\n            return self.y_raw[tuple(self.pruned_idx)] if pruned else self.y_raw\n\n        data = self.y_raw[:, self.y_dim] if self.y_dim is not None else self.y_raw\n        data = data[:, np.newaxis] if len(data.shape) == 1 else data\n\n        if pruned or cluster is not None:\n            data = data[tuple(self.pruned_idx)]\n            data = data[self.cluster_agency[cluster]] if cluster is not None else data\n\n        return data\n\n    @property\n    def y(self):\n        return self.get_y()\n\n    def get_x_pred(self, pruned=True, raw=False):\n\n        if raw:\n            return self.x_pred_raw[tuple(self.pruned_idx)] if pruned else self.x_pred_raw\n\n        data = self.x_pred_raw[:, self.y_dim] if self.y_dim is not None else self.x_pred_raw\n        data = data[:, np.newaxis] if len(data.shape) == 1 else 
data\n\n        if pruned:\n            data = data[tuple(self.pruned_idx)]\n\n        return data\n\n    @property\n    def x_pred(self):\n        return self.get_x_pred()\n\n    def get_dt(self, pruned=True):\n\n        return self.dt_raw[tuple(self.pruned_idx)] if pruned else self.dt_raw\n\n    @property\n    def dt(self):\n        return self.get_dt()\n\n    def cluster(self, n_clusters, load_clusters=False, save_dir=\"\", visualize_data=False):\n        self.n_clusters = n_clusters\n\n        x_pruned = self.x\n        y_pruned = self.y\n\n        file_name = os.path.join(save_dir, 'gmm.pkl')\n        loaded = False\n        gmm = None\n        if os.path.exists(file_name) and load_clusters:\n            print(\"Loaded GP clusters from last GP training session. File: {}\".format(file_name))\n            gmm = joblib.load(file_name)\n            if gmm.n_components != n_clusters:\n                print(\"The loaded GMM does not match the requested number of clusters: found {} but \"\n                      \"requested {}\".format(gmm.n_components, n_clusters))\n            else:\n                loaded = True\n        if not loaded:\n            gmm = GaussianMixture(n_clusters).fit(x_pruned)\n        clusters = gmm.predict_proba(x_pruned)\n        centroids = gmm.means_\n\n        if not load_clusters and n_clusters > 1:\n            safe_mknode_recursive(save_dir, 'gmm.pkl', overwrite=True)\n            joblib.dump(gmm, file_name)\n\n        index_aux = np.arange(0, clusters.shape[0])\n        cluster_agency = {}\n\n        # For each cluster, also include the points for which it is the second most likely cluster,\n        # provided that second probability is high enough\n        for cluster in range(n_clusters):\n            top_1 = np.argmax(clusters, 1)\n            clusters_ = copy.deepcopy(clusters)\n            clusters_[index_aux, top_1] *= 0\n            top_2 = np.argmax(clusters_, 1)\n            idx = np.where(top_1 == cluster)[0]\n            idx = 
np.append(idx, np.where((top_2 == cluster) * (clusters_[index_aux, top_2] > 0.2))[0])\n            cluster_agency[cluster] = idx\n\n        # Only works if len(x_features) >= 3\n        if visualize_data:\n            x_aux = self.get_x(pruned=False)\n            y_aux = self.get_y(pruned=False)\n            visualize_data_distribution(x_aux, y_aux, cluster_agency, x_pruned, y_pruned)\n\n        self.cluster_agency = cluster_agency\n        self.centroids = centroids\n\n\ndef restore_gp_regressors(pre_trained_models):\n    \"\"\"\n    :param pre_trained_models: A dictionary with all the GP models to be restored in 'models'.\n    :return: The GP ensemble restored from the models.\n    :rtype: GPEnsemble\n    \"\"\"\n\n    gp_reg_ensemble = GPEnsemble()\n    # TODO: Deprecate compatibility mode with old models.\n    if all(item in list(pre_trained_models.keys()) for item in [\"x_features\", \"u_features\"]):\n        x_features = pre_trained_models[\"x_features\"]\n        u_features = pre_trained_models[\"u_features\"]\n    else:\n        x_features = u_features = None\n\n    if isinstance(pre_trained_models['models'][0], dict):\n        pre_trained_gp_reg = {}\n        for model_dict in pre_trained_models['models']:\n            if x_features is not None:\n                gp_reg = npGPRegression(x_features, u_features, model_dict[\"reg_dim\"])\n            else:\n                gp_reg = npGPRegression(model_dict[\"x_features\"], model_dict[\"u_features\"], model_dict[\"reg_dim\"])\n            gp_reg.load(model_dict)\n            if model_dict[\"reg_dim\"] not in pre_trained_gp_reg.keys():\n                pre_trained_gp_reg[model_dict[\"reg_dim\"]] = [gp_reg]\n            else:\n                pre_trained_gp_reg[model_dict[\"reg_dim\"]] += [gp_reg]\n\n        # Add the GP's in a per-output-dim basis into the Ensemble\n        for key in np.sort(list(pre_trained_gp_reg.keys())):\n            gp_reg_ensemble.add_model(pre_trained_gp_reg[key])\n    
else:\n        raise NotImplementedError(\"Cannot load this format of GP model.\")\n\n    return gp_reg_ensemble\n\n\ndef read_dataset(name, train_split, sim_options):\n    \"\"\"\n    Attempts to read a dataset given its name and its metadata.\n    :param name: Name of the dataset\n    :param train_split: Whether to load the training split (True) or the test split (False)\n    :param sim_options: Dictionary of metadata of the dataset to be read.\n    :return:\n    \"\"\"\n    data_file = get_data_dir_and_file(name, training_split=train_split, params=sim_options, read_only=True)\n    if data_file is None:\n        raise FileNotFoundError\n    rec_file_dir, rec_file_name = data_file\n    if rec_file_name.startswith('npz'):\n        rec_file_name = rec_file_name[:-4] + '.npz'\n        rec_file = os.path.join(rec_file_dir, rec_file_name)\n        ds = np.load(rec_file)\n    else:\n        rec_file = os.path.join(rec_file_dir, rec_file_name)\n        ds = pd.read_csv(rec_file)\n\n    return ds\n\n\ndef world_to_body_velocity_mapping(state_sequence):\n    \"\"\"\n\n    :param state_sequence: N x 13 state array, where N is the number of states in the sequence.\n    :return: An N x 13 sequence of states, but where the velocities (assumed to be in positions 7, 8, 9) have been\n    rotated from world to body frame. The rotation is made using the quaternion in positions 3, 4, 5, 6.\n    \"\"\"\n\n    p, q, v_w, w = separate_variables(state_sequence)\n    v_b = []\n    for i in range(len(q)):\n        v_b.append(v_dot_q(v_w[i], quaternion_inverse(q[i])))\n    v_b = np.stack(v_b)\n    return np.concatenate((p, q, v_b, w), 1)"
  },
  {
    "path": "ros_dd_mpc/src/model_fitting/gp_fitting.py",
    "content": "\"\"\" Executable script to train a custom Gaussian Process on recorded flight data.\n\nThis program is free software: you can redistribute it and/or modify it under\nthe terms of the GNU General Public License as published by the Free Software\nFoundation, either version 3 of the License, or (at your option) any later\nversion.\nThis program is distributed in the hope that it will be useful, but WITHOUT\nANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\nFOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.\nYou should have received a copy of the GNU General Public License along with\nthis program. If not, see <http://www.gnu.org/licenses/>.\n\"\"\"\n\n\nimport os\nimport time\nimport argparse\nimport subprocess\nimport numpy as np\nfrom tqdm import tqdm\nimport matplotlib.pyplot as plt\n\nfrom src.utils.utils import safe_mkdir_recursive, load_pickled_models\nfrom src.utils.utils import distance_maximizing_points, get_model_dir_and_file\nfrom src.utils.utils import sample_random_points\nfrom src.model_fitting.gp import CustomKernelFunctions as npKernelFunctions\nfrom src.model_fitting.gp import CustomGPRegression as npGPRegression\nfrom src.model_fitting.gp import GPEnsemble\nfrom src.model_fitting.gp_common import GPDataset, restore_gp_regressors, read_dataset\nfrom src.model_fitting.gp_visualization import gp_visualization_experiment\nfrom config.configuration_parameters import ModelFitConfig as Conf\n\n\ndef plot_gp_regression(x_test, y_test, x_train, y_train, gp_mean, gp_std, gp_regressor, labels, title='', n_samples=3):\n\n    # Assert the number of provided labels is coherent with the feature dimension [1] of the x vectors\n    if len(x_test.shape) == 1:\n        x_test = np.expand_dims(x_test, 1)\n        n_subplots = 1\n    else:\n        n_subplots = x_test.shape[1]\n\n    assert len(labels) == x_test.shape[1]\n\n    # Generate samples if a gp_regressor is provided\n    if gp_regressor 
is not None:\n        # Sample from GP & plot samples\n        y_samples = gp_regressor.sample_y(x_test, n_samples)\n        y_samples = np.squeeze(y_samples)\n    else:\n        y_samples = None\n\n    for i in range(n_subplots):\n        plt.subplot(n_subplots, 1, i + 1)\n\n        # Sort x axis values\n        x_sort_ind_test = np.argsort(x_test[:, i])\n\n        # Plot gp mean line\n        plt.plot(x_test[x_sort_ind_test, i], gp_mean[x_sort_ind_test], 'k', lw=3, zorder=9)\n\n        # Plot gp std area\n        plt.fill_between(x_test[x_sort_ind_test, i],\n                         gp_mean[x_sort_ind_test] - 3 * gp_std[x_sort_ind_test],\n                         gp_mean[x_sort_ind_test] + 3 * gp_std[x_sort_ind_test],\n                         alpha=0.2, color='k')\n\n        if y_samples is not None:\n            plt.plot(x_test[x_sort_ind_test, i], y_samples[x_sort_ind_test], '-o', lw=1)\n            plt.xlim(min(x_test[:, i]), max(x_test[:, i]))\n            plt.ylim(min(np.min(y_samples), np.min(gp_mean - 3 * gp_std)),\n                     max(np.max(y_samples), np.max(gp_mean + 3 * gp_std)))\n\n        if x_train is not None and y_train is not None:\n            plt.scatter(x_train[:, i], y_train, c='r', s=50, zorder=10, edgecolors=(0, 0, 0))\n\n        if y_test is not None:\n            plt.plot(x_test[x_sort_ind_test, i], y_test[x_sort_ind_test], lw=1, marker='o')\n\n        if i == 0 and title != '':\n            plt.title(title, fontsize=12)\n\n        plt.ylabel(labels[i])\n\n    plt.tight_layout()\n\n\ndef gp_train_and_save(x, y, gp_regressors, save_model, save_file, save_path, y_dims, cluster_n, progress_bar=True):\n    \"\"\"\n    Trains and saves the 'm' GP's in the gp_regressors list. Each of these regressors will predict on one of the output\n    variables only.\n\n    :param x: Feature variables for the regression training. 
A list of length m where each entry is a Nxn dataset, N is\n    the number of training samples and n is the feature space dimensionality. Each entry of this list will be used to\n    train the respective GP.\n    :param y: Output variables for the regression training. A list of length m where each entry is a N array, N is the\n    number of training samples. Each entry of this list will be used as ground truth output for the respective GP.\n    :param gp_regressors: List of m GPRegression objects (npGPRegression or skGPRegression)\n    :param save_model: Bool. Whether to save the trained models or not.\n    :param save_file: Name of file where to save the model. The 'pkl' extension will be added automatically.\n    :param save_path: Path where to store the trained model pickle file.\n    :param y_dims: List of length m, where each entry represents the state index that corresponds to each GP.\n    :param cluster_n: Number (int) of the cluster currently being trained.\n    :param progress_bar: Bool. 
Whether to visualize a progress bar or not.\n    :return: the same list as the input gp_regressors variable, but each GP has been trained and saved if requested.\n    \"\"\"\n\n    # If saving the model, create the saving directory\n    if save_model:\n        safe_mkdir_recursive(save_path)\n\n    if progress_bar:\n        print(\"Training {} GP regression models\".format(len(y_dims)))\n    prog_range = tqdm(y_dims) if progress_bar else y_dims\n\n    for y_dim_reg, dim in enumerate(prog_range):\n\n        # Fit one regressor for each output dimension\n        gp_regressors[y_dim_reg].fit(x[y_dim_reg], y[y_dim_reg])\n        if save_model:\n            full_path = os.path.join(save_path, save_file + '_' + str(dim) + '_' + str(cluster_n) + '.pkl')\n            gp_regressors[y_dim_reg].save(full_path)\n\n    return gp_regressors\n\n\ndef main(x_features, u_features, reg_y_dim, quad_sim_options, dataset_name,\n         x_cap, hist_bins, hist_thresh,\n         n_train_points=50, n_restarts=10, n_clusters=1, load_clusters=False, model_name=\"model\",\n         dense_gp_name=\"model\", dense_gp_points=100, dense_gp_version=\"\", use_dense=False,\n         visualize_data=False, visualize_model=False):\n\n    \"\"\"\n    Reads the dataset specified and trains a GP model or ensemble on it. The regressed variable is the time-derivative\n    of one of the states.\n    The feature and regressed variables are identified through the index number in the state vector. 
It is defined as:\n    [0: p_x, 1: p_y, 2: p_z, 3: q_w, 4: q_x, 5: q_y, 6: q_z, 7: v_x, 8: v_y, 9: v_z, 10: w_x, 11: w_y, 12: w_z]\n    The input vector is also indexed:\n    [0: u_0, 1: u_1, 2: u_2, 3: u_3].\n\n    :param x_features: List of n regression feature indices from the quadrotor state indexing.\n    :type x_features: list\n    :param u_features: List of n' regression feature indices from the input state.\n    :type u_features: list\n    :param reg_y_dim: Index of output dimension being regressed as the time-derivative.\n    :type reg_y_dim: int\n    :param dataset_name: Name of the dataset.\n    :param quad_sim_options: Dictionary of metadata of the dataset to be read.\n    :param x_cap: Cap value (in absolute value) for dataset pruning. Any input feature that exceeds this number in any\n    dimension will be removed.\n    :param hist_bins: Number of bins used for histogram pruning.\n    :param hist_thresh: Any bin containing a smaller percentage of the data than this threshold will be removed.\n    :param n_train_points: Number of training points used for the current GP training.\n    :param dense_gp_points: Number of training points used for the dense GP training. The dense GP will be loaded if\n    possible.\n    :param n_restarts: Number of restarts of the nonlinear optimizer.\n    :param n_clusters: Number of clusters used in the GP ensemble. 
If 1, a normal GP is trained.\n    :param load_clusters: If True, attempt to load the clusters from the last GP training.\n    :param model_name: Given name to the currently trained GP.\n    :param dense_gp_name: Given name to the dense GP in case it needs to be used.\n    :param dense_gp_version: Git hash of the folder where the dense GP is stored.\n    :param use_dense: Whether to sample the training set from a dense GP or directly from the training data.\n    :param visualize_data: If True, display plots of the data loading, pruning and training process.\n    :param visualize_model: If True, display performance plots of the trained model.\n    \"\"\"\n\n    # #### Prepare saving directory for GP's #### #\n    # Get git commit hash for saving the model\n    git_version = ''\n    try:\n        git_version = subprocess.check_output(['git', 'describe', '--always']).strip().decode(\"utf-8\")\n    except subprocess.CalledProcessError as e:\n        print(e.returncode, e.output)\n    print(\"The model will be saved using hash: %s\" % git_version)\n\n    gp_name_dict = {\"git\": git_version, \"model_name\": model_name, \"params\": quad_sim_options}\n    save_file_path, save_file_name = get_model_dir_and_file(gp_name_dict)\n\n    # #### DATASET LOADING #### #\n    if isinstance(dataset_name, str):\n        df_train = read_dataset(dataset_name, True, quad_sim_options)\n        gp_dataset = GPDataset(df_train, x_features, u_features, reg_y_dim,\n                               cap=x_cap, n_bins=hist_bins, thresh=hist_thresh, visualize_data=visualize_data)\n    elif isinstance(dataset_name, GPDataset):\n        gp_dataset = dataset_name\n    else:\n        raise TypeError(\"dataset_name must be a string or a GPDataset instance.\")\n\n    # Make clusters for multi-gp prediction\n    gp_dataset.cluster(n_clusters, load_clusters=load_clusters, save_dir=save_file_path, visualize_data=visualize_data)\n\n    # #### LOAD DENSE GP IF USING GP ENSEMBLE #### #\n    if use_dense:\n    
    load_options = {\"git\": dense_gp_version, \"model_name\": dense_gp_name, \"params\": quad_sim_options}\n        loaded_models = load_pickled_models(model_options=load_options)\n\n        if loaded_models is None:\n            print(\"Model not found. Training a new dense GP with {} points\".format(dense_gp_points))\n            # Train a model that is as accurate as possible. If dense_gp_points is specified, train a GP with that\n            # number of training samples and use it to generate a dataset for the GP ensemble.\n            dense_gp = main(x_features, u_features, reg_y_dim, quad_sim_options, dataset_name, x_cap, hist_bins,\n                            hist_thresh, n_train_points=dense_gp_points, n_restarts=n_restarts,\n                            model_name=dense_gp_name)\n\n        else:\n            dense_gp = restore_gp_regressors(loaded_models)\n            print(\"Loaded dense GP model from: %s/%s\" % (dense_gp_version, dense_gp_name))\n\n    else:\n        dense_gp = None\n\n    # #### DECLARE SOME PARAMETERS AND VARIABLES #### #\n    # List of trained GP regressors. 
One for each cluster\n    gp_regressors = []\n\n    # Prior parameters\n    sigma_f = 0.5\n    length_scale = .1\n    sigma_n = 0.01\n\n    gp_params = {\"x_features\": x_features, \"u_features\": u_features, \"reg_dim\": reg_y_dim,\n                 \"sigma_n\": sigma_n, \"n_restarts\": n_restarts}\n\n    # Get all cluster centroids for the current output dimension\n    centroids = gp_dataset.centroids\n    print(\"Training {} cluster model(s)\".format(n_clusters))\n    range_vec = tqdm(range(n_clusters))\n    for cluster in range_vec:\n\n        # #### TRAINING POINT SELECTION #### #\n\n        cluster_mean = centroids[cluster]\n        cluster_x_points = gp_dataset.get_x(cluster=cluster)\n        cluster_y_points = gp_dataset.get_y(cluster=cluster)\n        cluster_u_points = gp_dataset.get_u(cluster=cluster)\n\n        # Select a base set of training points for the current cluster using PCA that are as separate from each\n        # other as possible\n        selected_points = distance_maximizing_points(\n            cluster_x_points, cluster_mean, n_train_points=n_train_points, dense_gp=dense_gp, plot=False)\n\n        cluster_y_mean = np.mean(cluster_y_points, 0)\n\n        # If no dense_gp was provided to the previous function, training_points will be the indices of the training\n        # points to choose from the training set\n        if dense_gp is None:\n            x_train = cluster_x_points[selected_points]\n            y_train = np.squeeze(cluster_y_points[selected_points])\n            training_points = selected_points\n        else:\n            # Generate a new dataset of synthetic data composed of x and y values\n            x_mock = np.zeros((13, selected_points.shape[1]))\n            if x_features:\n                x_mock[np.array(x_features), :] = selected_points[:len(x_features)]\n            u_mock = np.zeros((4, selected_points.shape[1]))\n            if u_features:\n                u_mock[np.array(u_features), :] = 
selected_points[len(x_features):]\n            out = dense_gp.predict(x_mock, u_mock)\n            out[\"pred\"] = np.atleast_2d(out[\"pred\"])\n            y_train = np.squeeze(out[\"pred\"][np.where(dense_gp.dim_idx == reg_y_dim)])\n            x_train = selected_points.T\n            training_points = []\n\n        # Check if we still haven't used the entirety of the available points\n        n_used_points = x_train.shape[0]\n        if n_used_points < n_train_points and n_used_points < cluster_x_points.shape[0]:\n\n            missing_pts = n_train_points - n_used_points\n\n            training_points = sample_random_points(cluster_x_points, training_points, missing_pts, dense_gp)\n            if dense_gp is None:\n                # Transform from cluster data index to full dataset index\n                x_train = cluster_x_points[training_points]\n                y_train = np.squeeze(cluster_y_points[training_points])\n\n            else:\n                # Generate a new dataset of synthetic data composed of x and y values\n                training_points = training_points.astype(int)\n                x_mock = np.zeros((13, len(training_points)))\n                if x_features:\n                    x_mock[np.array(x_features), :] = cluster_x_points[training_points, :len(x_features)].T\n                u_mock = np.zeros((4, len(training_points)))\n                if u_features:\n                    u_mock[np.array(u_features), :] = cluster_u_points[len(x_features):]\n                out = dense_gp.predict(x_mock, u_mock)\n                y_additional = np.squeeze(out[\"pred\"][np.where(dense_gp.dim_idx == reg_y_dim)])\n                y_train = np.append(y_train, y_additional)\n                x_train = np.concatenate((x_train, cluster_x_points[training_points, :len(x_features)]), axis=0)\n\n        # #### GP TRAINING #### #\n        # Multidimensional input GP regressors\n        l_scale = length_scale * np.ones((x_train.shape[1], 1))\n\n        cluster_mean 
= centroids[cluster]\n        gp_params[\"mean\"] = cluster_mean\n        gp_params[\"y_mean\"] = cluster_y_mean\n\n        # Train one independent GP for each output dimension\n        exponential_kernel = npKernelFunctions('squared_exponential', params={'l': l_scale, 'sigma_f': sigma_f})\n        gp_regressors.append(npGPRegression(kernel=exponential_kernel, **gp_params))\n        gp_regressors[cluster] = gp_train_and_save([x_train], [y_train], [gp_regressors[cluster]], True, save_file_name,\n                                                   save_file_path, [reg_y_dim], cluster, progress_bar=False)[0]\n\n    # Build the ensemble from the trained per-cluster regressors and return it, so that a recursive call to main()\n    # (e.g. when training a dense GP) gets a usable model back\n    gp_ensemble = GPEnsemble()\n    gp_ensemble.add_model(gp_regressors)\n\n    if visualize_model:\n        gp_visualization_experiment(quad_sim_options, gp_dataset,\n                                    x_cap, hist_bins, hist_thresh,\n                                    x_features, u_features, reg_y_dim,\n                                    grid_sampling_viz=True, pre_set_gp=gp_ensemble)\n\n    return gp_ensemble\n\n\ndef gp_evaluate_test_set(gp_regressors, gp_dataset, pruned=False, timed=False, progress_bar=False):\n    \"\"\"\n    Runs GP prediction on a specified dataset.\n\n    :param gp_regressors: GPEnsemble object\n    :type gp_regressors: GPEnsemble\n    :param gp_dataset: Dataset for evaluation\n    :type gp_dataset: GPDataset\n    :param pruned: Whether to use the pruned data or the raw data from the GPDataset.\n    :param timed: Whether to also return the elapsed time of the data evaluation.\n    :param progress_bar: If True, a progress bar will be shown when evaluating the test data.\n    :return: Given n samples of dimension d, return:\n        - an n x d array of mean predictions,\n        - an n x d array of std predictions,\n        and additionally the elapsed prediction time if timed is True.\n    \"\"\"\n\n    x_test = gp_dataset.get_x(pruned=pruned, raw=True)\n    u_test = gp_dataset.get_u(pruned=pruned, 
raw=True)\n    dt_test = gp_dataset.get_dt(pruned=pruned)\n\n    tic = time.time()\n    out = gp_regressors.predict(x_test.T, u_test.T, return_std=True, progress_bar=progress_bar)\n    mean_post = out[\"pred\"]\n    std_post = out[\"cov_or_std\"]\n    elapsed = time.time() - tic\n    mean_post *= dt_test\n    std_post *= dt_test\n    mean_post = mean_post.T\n    std_post = std_post.T\n\n    if not timed:\n        return mean_post, std_post\n    else:\n        return mean_post, std_post, elapsed\n\n\nif __name__ == '__main__':\n\n    parser = argparse.ArgumentParser()\n\n    parser.add_argument(\"--n_points\", type=int, default=20,\n                        help=\"Number of training points used to fit the current GP model.\")\n\n    parser.add_argument(\"--model_name\", type=str, default=\"\",\n                        help=\"Name assigned to the trained model.\")\n\n    parser.add_argument('--x', nargs='+', type=int, default=[7],\n                        help='Regression X variables. Must be a list of integers between 0 and 12. Velocities xyz '\n                             'correspond to indices 7, 8, 9.')\n\n    parser.add_argument('--u', action=\"store_true\",\n                        help='Use the control as input to the model.')\n\n    parser.add_argument(\"--y\", type=int, default=7,\n                        help=\"Regression Y variable. Must be an integer between 0 and 12. 
Velocities xyz correspond to \"\n                             \"indices 7, 8, 9.\")\n\n    input_arguments = parser.parse_args()\n\n    # Use vx, vy, vz as input features\n    x_feats = input_arguments.x\n    if input_arguments.u:\n        u_feats = [0, 1, 2, 3]\n    else:\n        u_feats = []\n\n    # Regression dimension\n    y_regressed_dim = input_arguments.y\n    n_train_pts = input_arguments.n_points\n    gp_name = input_arguments.model_name\n\n    ds_name = Conf.ds_name\n    simulation_options = Conf.ds_metadata\n\n    histogram_pruning_bins = Conf.histogram_bins\n    histogram_pruning_threshold = Conf.histogram_threshold\n    x_value_cap = Conf.velocity_cap\n\n    gp_dense_name = Conf.dense_model_name\n    gp_id_custom = Conf.dense_model_version\n    dense_n_points = Conf.dense_training_points\n    with_dense = Conf.use_dense_model\n\n    main(x_feats, u_feats, y_regressed_dim, simulation_options, ds_name,\n         x_value_cap, histogram_pruning_bins, histogram_pruning_threshold,\n         model_name=gp_name, n_train_points=n_train_pts,\n         n_clusters=Conf.clusters, load_clusters=Conf.load_clusters,\n         use_dense=with_dense,\n         dense_gp_points=dense_n_points, dense_gp_name=gp_dense_name, dense_gp_version=gp_id_custom,\n         visualize_data=Conf.visualize_data, visualize_model=Conf.visualize_training_result)\n
  },
  {
    "path": "ros_dd_mpc/src/model_fitting/gp_visualization.py",
    "content": "\"\"\" Executable script for visual evaluation of the trained GP quality.\n\nThis program is free software: you can redistribute it and/or modify it under\nthe terms of the GNU General Public License as published by the Free Software\nFoundation, either version 3 of the License, or (at your option) any later\nversion.\nThis program is distributed in the hope that it will be useful, but WITHOUT\nANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\nFOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.\nYou should have received a copy of the GNU General Public License along with\nthis program. If not, see <http://www.gnu.org/licenses/>.\n\"\"\"\n\nimport argparse\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom src.model_fitting.gp_common import GPDataset, restore_gp_regressors, read_dataset\nfrom config.configuration_parameters import ModelFitConfig as Conf\nfrom src.utils.utils import load_pickled_models\nfrom src.utils.visualization import visualize_gp_inference\n\n\ndef gp_visualization_experiment(quad_sim_options, dataset_name,\n                                x_cap, hist_bins, hist_thresh,\n                                x_vis_feats, u_vis_feats, y_vis_feats,\n                                grid_sampling_viz=False,\n                                load_model_version=\"\", load_model_name=\"\", pre_set_gp=None):\n\n    # #### GP ENSEMBLE LOADING #### #\n    if pre_set_gp is None:\n        load_options = {\"git\": load_model_version, \"model_name\": load_model_name, \"params\": quad_sim_options}\n        loaded_models = load_pickled_models(model_options=load_options)\n        if loaded_models is None:\n            raise FileNotFoundError(\"Model not found\")\n        gp_ensemble = restore_gp_regressors(loaded_models)\n    else:\n        gp_ensemble = pre_set_gp\n\n    # #### DATASET LOADING #### #\n    # Pre-set labels of the data:\n    labels_x = [\n        r'$p_x\\:\\left[m\\right]$', 
r'$p_y\\:\\left[m\\right]$', r'$p_z\\:\\left[m\\right]$',\n        r'$q_w\\:\\left[rad\\right]$', r'$q_x\\:\\left[rad\\right]$', r'$q_y\\:\\left[rad\\right]$', r'$q_z\\:\\left[rad\\right]$',\n        r'$v_x\\:\\left[\\frac{m}{s}\\right]$', r'$v_y\\:\\left[\\frac{m}{s}\\right]$', r'$v_z\\:\\left[\\frac{m}{s}\\right]$',\n        r'$w_x\\:\\left[\\frac{rad}{s}\\right]$', r'$w_y\\:\\left[\\frac{rad}{s}\\right]$', r'$w_z\\:\\left[\\frac{rad}{s}\\right]$'\n    ]\n    labels_u = [r'$u_1$', r'$u_2$', r'$u_3$', r'$u_4$']\n    labels = [labels_x[feat] for feat in x_vis_feats]\n    labels_ = [labels_u[feat] for feat in u_vis_feats]\n    labels = labels + labels_\n\n    if isinstance(dataset_name, str):\n        test_ds = read_dataset(dataset_name, True, quad_sim_options)\n        test_gp_ds = GPDataset(test_ds, x_features=x_vis_feats, u_features=u_vis_feats, y_dim=y_vis_feats,\n                               cap=x_cap, n_bins=hist_bins, thresh=hist_thresh, visualize_data=False)\n    else:\n        test_gp_ds = dataset_name\n\n    x_test = test_gp_ds.get_x(pruned=True, raw=True)\n    u_test = test_gp_ds.get_u(pruned=True, raw=True)\n    y_test = test_gp_ds.get_y(pruned=True, raw=False)\n    dt_test = test_gp_ds.get_dt(pruned=True)\n    x_pred = test_gp_ds.get_x_pred(pruned=True, raw=False)\n\n    if isinstance(y_vis_feats, list):\n        y_dims = [np.where(gp_ensemble.dim_idx == y_dim)[0][0] for y_dim in y_vis_feats]\n    else:\n        y_dims = np.where(gp_ensemble.dim_idx == y_vis_feats)[0]\n\n    # #### VISUALIZE GRID SAMPLING RESULTS IN TRAINING SET RANGE #### #\n    if grid_sampling_viz:\n        visualize_gp_inference(x_test, u_test, y_test, gp_ensemble, x_vis_feats, y_dims, labels)\n\n    # #### EVALUATE GP ON TEST SET #### #\n    print(\"Test set prediction...\")\n    outs = gp_ensemble.predict(x_test.T, u_test.T, return_std=True, progress_bar=True)\n    mean_estimate = np.atleast_2d(np.atleast_2d(outs[\"pred\"])[y_dims])\n    std_estimate = 
np.atleast_2d(np.atleast_2d(outs[\"cov_or_std\"])[y_dims])\n    mean_estimate = mean_estimate.T * dt_test[:, np.newaxis]\n    std_estimate = std_estimate.T * dt_test[:, np.newaxis]\n\n    # Undo dt normalization\n    y_test *= dt_test[:, np.newaxis]\n\n    # Error of nominal model\n    nominal_diff = y_test\n\n    # The GP regresses the model error; correct the nominal model predictions with it\n    augmented_diff = nominal_diff - mean_estimate\n    mean_estimate += x_pred\n\n    nominal_rmse = np.mean(np.abs(nominal_diff), 0)\n    augmented_rmse = np.mean(np.abs(augmented_diff), 0)\n\n    labels = [r'$v_x$ error', r'$v_y$ error', r'$v_z$ error']\n    t_vec = np.cumsum(dt_test)\n    plt.figure()\n    # mng = plt.get_current_fig_manager()\n    # mng.resize(*mng.window.maxsize())\n    for i in range(std_estimate.shape[1]):\n        plt.subplot(std_estimate.shape[1], 1, i+1)\n        plt.plot(t_vec, np.zeros(augmented_diff[:, i].shape), 'k')\n        plt.plot(t_vec, augmented_diff[:, i], 'b', label='augmented_err')\n        plt.plot(t_vec, augmented_diff[:, i] - 3 * std_estimate[:, i], ':c')\n        plt.plot(t_vec, augmented_diff[:, i] + 3 * std_estimate[:, i], ':c', label='3 std')\n        if nominal_diff is not None:\n            plt.plot(t_vec, nominal_diff[:, i], 'r', label='nominal_err')\n            plt.title('Mean dt: %.2f s. Nominal RMSE: %.5f [m/s].  Augmented RMSE: %.5f [m/s]' % (\n                float(np.mean(dt_test)), nominal_rmse[i], augmented_rmse[i]))\n        else:\n            plt.title('Mean dt: %.2f s. 
Augmented RMSE: %.5f [m/s]' % (\n                float(np.mean(dt_test)), float(augmented_rmse[i])))\n\n        plt.ylabel(labels[i])\n        plt.legend()\n\n        if i == std_estimate.shape[1] - 1:\n            plt.xlabel('time (s)')\n\n    plt.show()\n\n\nif __name__ == '__main__':\n\n    parser = argparse.ArgumentParser()\n\n    parser.add_argument(\"--model_version\", type=str, default=\"\",\n                        help=\"Version to load for the regression models. By default it is an 8 digit git hash.\")\n\n    parser.add_argument(\"--model_name\", type=str, default=\"\",\n                        help=\"Name of the regression model within the specified <model_version> folder.\")\n\n    input_arguments = parser.parse_args()\n\n    ds_name = Conf.ds_name\n    simulation_options = Conf.ds_metadata\n\n    histogram_pruning_bins = Conf.histogram_bins\n    histogram_pruning_threshold = Conf.histogram_threshold\n    x_value_cap = Conf.velocity_cap\n\n    x_viz = Conf.x_viz\n    u_viz = Conf.u_viz\n    y_viz = Conf.y_viz\n    gp_load_model_name = input_arguments.model_name\n    gp_load_model_version = input_arguments.model_version\n    gp_visualization_experiment(simulation_options, ds_name,\n                                x_value_cap, histogram_pruning_bins, histogram_pruning_threshold,\n                                x_vis_feats=x_viz, u_vis_feats=u_viz, y_vis_feats=y_viz,\n                                grid_sampling_viz=True,\n                                load_model_version=gp_load_model_version,\n                                load_model_name=gp_load_model_name)"
  },
  {
    "path": "ros_dd_mpc/src/model_fitting/mlp_common.py",
    "content": "import numpy as np\nfrom torch.utils.data import Dataset\n\nimport ml_casadi.torch as mc\nfrom config.configuration_parameters import GroundEffectMapConfig\nfrom src.utils.ground_map import GroundMapWithBox\n\n\nclass GPToMLPDataset(Dataset):\n    def __init__(self, gp_dataset, ground_effect=False):\n        super().__init__()\n        self.x = gp_dataset.x\n        self.y = gp_dataset.y\n        if ground_effect:\n            self.x_raw = gp_dataset.x_raw[tuple(gp_dataset.pruned_idx)]\n            assert self.x.shape[0] == self.x_raw.shape[0]\n            map_conf = GroundEffectMapConfig\n            ground_map = GroundMapWithBox(np.array(map_conf.box_min),\n                                          np.array(map_conf.box_max),\n                                          map_conf.box_height,\n                                          horizon=map_conf.horizon,\n                                          resolution=map_conf.resolution)\n            self._map_res = map_conf.resolution\n            self._static_ground_map, self._org_to_map_org = ground_map.at(np.array(map_conf.origin))\n        else:\n            self._static_ground_map = None\n\n    def stats(self):\n        if self._static_ground_map is None:\n            return self.x.mean(axis=0), self.x.std(axis=0), self.y.mean(axis=0), self.y.std(axis=0)\n        else:\n            x_mean = np.hstack([self.x.mean(axis=0), np.zeros(9+4)])\n            x_std = np.hstack([self.x.std(axis=0), np.ones(9+4)])\n            return x_mean, x_std, self.y.mean(axis=0), self.y.std(axis=0)\n\n    def __len__(self):\n        return len(self.x)\n\n    def __getitem__(self, item):\n        if self._static_ground_map is None:\n            return self.x[item], self.y[item]\n        else:\n            x, y, z = self.x_raw[item][:3]\n            orientation = self.x_raw[item][3:7]\n            x_idxs = np.floor((x - self._org_to_map_org[0]) / self._map_res).astype(int) - 1\n            y_idxs = np.floor((y - 
self._org_to_map_org[1]) / self._map_res).astype(int) - 1\n            ground_patch = self._static_ground_map[x_idxs:x_idxs+3, y_idxs:y_idxs+3]\n\n            relative_ground_patch = 4 * (np.clip(z - ground_patch, 0, 0.5) - 0.25)\n\n            flatten_relative_ground_patch = relative_ground_patch.flatten(order='F')\n\n            ground_effect_in = np.hstack([flatten_relative_ground_patch, orientation*0])\n\n            return np.hstack([self.x[item], ground_effect_in]), self.y[item]\n\n\nclass NormalizedMLP(mc.TorchMLCasadiModule):\n    def __init__(self, model, x_mean, x_std, y_mean, y_std):\n        super().__init__()\n        self.model = model\n        self.input_size = self.model.input_size\n        self.output_size = self.model.output_size\n        self.register_buffer('x_mean', x_mean)\n        self.register_buffer('x_std', x_std)\n        self.register_buffer('y_mean', y_mean)\n        self.register_buffer('y_std', y_std)\n\n    def forward(self, x):\n        return (self.model((x - self.x_mean) / self.x_std) * self.y_std) + self.y_mean\n\n    def cs_forward(self, x):\n        return (self.model((x - self.x_mean.cpu().numpy()) / self.x_std.cpu().numpy()) * self.y_std.cpu().numpy()) + self.y_mean.cpu().numpy()\n\n\nclass QuadResidualModel(mc.TorchMLCasadiModule):\n    def __init__(self, hidden_size, hidden_layers):\n        super().__init__()\n        self._input_size = 10\n        self._output_size = 6\n        self.force_model = mc.nn.MultiLayerPerceptron(7, hidden_size, 3, hidden_layers, 'Tanh')\n        self.torque_model = mc.nn.MultiLayerPerceptron(10, hidden_size, 3, hidden_layers, 'Tanh')\n\n    def forward(self, x):\n        v = x[:, :3]\n        w = x[:, 3:6]\n        u = x[:, 6:]\n        force_in = mc.hcat([v, u])\n        torque_in = mc.hcat([v, w, u])\n        force_out = self.force_model(force_in)\n        torque_out = self.torque_model(torque_in)\n        out = mc.hcat([force_out, torque_out])\n        return out\n\n    def cs_forward(self, 
x):\n        v = x[:3]\n        w = x[3:6]\n        u = x[6:]\n        force_in = mc.vcat([v, u])\n        torque_in = mc.vcat([v, w, u])\n        force_out = self.force_model(force_in)\n        torque_out = self.torque_model(torque_in)\n        out = mc.vcat([force_out, torque_out])\n        return out\n"
  },
  {
    "path": "ros_dd_mpc/src/model_fitting/mlp_fitting.py",
    "content": "\"\"\" Executable script to train an MLP model on recorded flight data.\n\nThis program is free software: you can redistribute it and/or modify it under\nthe terms of the GNU General Public License as published by the Free Software\nFoundation, either version 3 of the License, or (at your option) any later\nversion.\nThis program is distributed in the hope that it will be useful, but WITHOUT\nANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\nFOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.\nYou should have received a copy of the GNU General Public License along with\nthis program. If not, see <http://www.gnu.org/licenses/>.\n\"\"\"\n\n\nimport os\nimport time\nimport argparse\nimport subprocess\nimport numpy as np\n\nimport torch\nfrom torch.utils.data import DataLoader\nimport ml_casadi.torch as mc\n\nfrom tqdm import tqdm\n\nfrom src.model_fitting.mlp_common import GPToMLPDataset, NormalizedMLP\nfrom src.utils.utils import safe_mkdir_recursive, load_pickled_models\nfrom src.utils.utils import get_model_dir_and_file\nfrom src.model_fitting.gp_common import GPDataset, read_dataset\nfrom config.configuration_parameters import ModelFitConfig as Conf\n\n\n\ndef main(x_features, u_features, reg_y_dims, model_ground_effect, quad_sim_options, dataset_name,\n         x_cap, hist_bins, hist_thresh,\n         model_name=\"model\", epochs=100, batch_size=64, hidden_layers=1, hidden_size=32, plot=False):\n\n    \"\"\"\n    Reads the dataset specified and trains an MLP model on it. The regressed variables are the time-derivatives\n    of the selected states.\n    The feature and regressed variables are identified through the index number in the state vector. 
It is defined as:\n    [0: p_x, 1: p_y, 2: p_z, 3: q_w, 4: q_x, 5: q_y, 6: q_z, 7: v_x, 8: v_y, 9: v_z, 10: w_x, 11: w_y, 12: w_z]\n    The input vector is also indexed:\n    [0: u_0, 1: u_1, 2: u_2, 3: u_3].\n\n    :param x_features: List of n regression feature indices from the quadrotor state indexing.\n    :type x_features: list\n    :param u_features: List of n' regression feature indices from the input state.\n    :type u_features: list\n    :param reg_y_dims: Indices of the output dimensions being regressed as time-derivatives.\n    :type reg_y_dims: list\n    :param dataset_name: Name of the dataset.\n    :param quad_sim_options: Dictionary of metadata of the dataset to be read.\n    :param x_cap: cap value (in absolute value) for dataset pruning. Any input feature that exceeds this number in any\n    dimension will be removed.\n    :param hist_bins: Number of bins used for histogram pruning.\n    :param hist_thresh: Any bin with less data percentage than this number will be removed.\n    :param model_name: Given name to the currently trained model.\n    \"\"\"\n\n    # Get git commit hash for saving the model\n    git_version = ''\n    try:\n        git_version = subprocess.check_output(['git', 'describe', '--always']).strip().decode(\"utf-8\")\n    except subprocess.CalledProcessError as e:\n        print(e.returncode, e.output)\n    print(\"The model will be saved using hash: %s\" % git_version)\n\n    gp_name_dict = {\"git\": git_version, \"model_name\": model_name, \"params\": quad_sim_options}\n    save_file_path, save_file_name = get_model_dir_and_file(gp_name_dict)\n    safe_mkdir_recursive(save_file_path)\n\n    # #### DATASET LOADING #### #\n    if isinstance(dataset_name, str):\n        df_train = read_dataset(dataset_name, True, quad_sim_options)\n        gp_dataset_train = GPDataset(df_train, x_features, u_features, reg_y_dims,\n                                     cap=x_cap, n_bins=hist_bins, thresh=hist_thresh)\n        gp_dataset_val = None\n      
  try:\n            df_val = read_dataset(dataset_name, False, quad_sim_options)\n            gp_dataset_val = GPDataset(df_val, x_features, u_features, reg_y_dims,\n                                       cap=x_cap, n_bins=hist_bins, thresh=hist_thresh)\n        except Exception:\n            print('Could not find test dataset.')\n    else:\n        raise TypeError(\"dataset_name must be a string.\")\n\n    dataset_train = GPToMLPDataset(gp_dataset_train, ground_effect=model_ground_effect)\n    x_mean, x_std, y_mean, y_std = dataset_train.stats()\n    data_loader = DataLoader(dataset_train, batch_size=batch_size, shuffle=True, num_workers=0)\n    input_dims = len(x_features) + len(u_features) + (9 + 4 if model_ground_effect else 0)\n    mlp_model = mc.nn.MultiLayerPerceptron(input_dims, hidden_size, len(reg_y_dims), hidden_layers, 'Tanh')\n    model = NormalizedMLP(mlp_model, torch.tensor(x_mean).float(), torch.tensor(x_std).float(),\n                          torch.tensor(y_mean).float(), torch.tensor(y_std).float())\n    print(f\"Train Dataset has {len(dataset_train)} points.\")\n\n    if gp_dataset_val:\n        dataset_val = GPToMLPDataset(gp_dataset_val, ground_effect=model_ground_effect)\n        data_loader_val = DataLoader(dataset_val, batch_size=10*batch_size, shuffle=False, num_workers=0)\n        print(f\"Val Dataset has {len(dataset_val)} points.\")\n\n    print(f\"Model has {sum(p.numel() for p in model.parameters() if p.requires_grad)} parameters.\")\n\n    if torch.cuda.is_available():\n        model = model.to('cuda:0')\n\n    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)\n\n    loss_infos = []\n    p_bar = tqdm(range(epochs))\n    for i in p_bar:\n        model.train()\n        losses = []\n        for x, y in data_loader:\n            if torch.cuda.is_available():\n                x = x.to('cuda:0')\n                y = y.to('cuda:0')\n            x = x.float()\n            y = y.float()\n            optimizer.zero_grad()\n            y_pred = 
model(x)\n            loss = torch.square(y - y_pred).mean()\n            loss.backward()\n            optimizer.step()\n\n            losses.append(loss.item())\n\n        losses_val = []\n        if gp_dataset_val:\n            with torch.no_grad():\n                for x, y in data_loader_val:\n                    if torch.cuda.is_available():\n                        x = x.to('cuda:0')\n                        y = y.to('cuda:0')\n                    x = x.float()\n                    y = y.float()\n                    y_pred = model(x)\n                    loss = torch.square(y - y_pred)\n\n                    losses_val.append(loss.cpu().numpy())\n        train_loss_info = np.mean(losses)\n        loss_info = np.mean(np.vstack(losses_val)) if losses_val else np.mean(losses)\n        loss_infos.append(loss_info)\n        p_bar.set_description(f'Train Loss: {train_loss_info}, Val Loss {loss_info:.6f}')\n        p_bar.refresh()\n\n        save_dict = {\n            'state_dict': model.state_dict(),\n            'input_size': input_dims,\n            'hidden_size': hidden_size,\n            'output_size': len(reg_y_dims),\n            'hidden_layers': hidden_layers\n        }\n        torch.save(save_dict, os.path.join(save_file_path, f'{save_file_name}.pt'))\n\n    if plot:\n        import matplotlib.pyplot as plt\n        plt.plot(loss_infos)\n        plt.show()\n\n\nif __name__ == '__main__':\n\n    parser = argparse.ArgumentParser()\n\n    parser.add_argument(\"--epochs\", type=int, default=\"1000\",\n                        help=\"Number of epochs to train the model.\")\n\n    parser.add_argument(\"--batch_size\", type=int, default=\"64\",\n                        help=\"Batch Size.\")\n\n    parser.add_argument(\"--hidden_layers\", type=int, default=\"1\",\n                        help=\"Number of hidden layers.\")\n\n    parser.add_argument(\"--hidden_size\", type=int, default=\"32\",\n                        help=\"Size of hidden layers.\")\n\n    
parser.add_argument(\"--model_name\", type=str, default=\"\",\n                        help=\"Name assigned to the trained model.\")\n\n    parser.add_argument('--x', nargs='+', type=int, default=[7],\n                        help='Regression X variables. Must be a list of integers between 0 and 12. Velocities xyz '\n                             'correspond to indices 7, 8, 9.')\n\n    parser.add_argument('--u', action=\"store_true\",\n                        help='Use the control as input to the model.')\n\n    parser.add_argument(\"--y\", nargs='+', type=int, default=[7],\n                        help=\"Regression Y variable. Must be an integer between 0 and 12. Velocities xyz correspond to\"\n                             \"indices 7, 8, 9.\")\n\n    parser.add_argument('--ge', action=\"store_true\",\n                        help='Use the ground map as input to the model.')\n\n    parser.add_argument(\"--plot\", dest=\"plot\", action=\"store_true\",\n                        help=\"Plot the loss after training.\")\n    parser.set_defaults(plot=False)\n\n    input_arguments = parser.parse_args()\n\n    # Use vx, vy, vz as input features\n    x_feats = input_arguments.x\n    if input_arguments.u:\n        u_feats = [0, 1, 2, 3]\n    else:\n        u_feats = []\n\n    model_ground_effect = input_arguments.ge\n\n    # Regression dimension\n    y_regressed_dims = input_arguments.y\n    model_name = input_arguments.model_name\n\n    epochs = input_arguments.epochs\n    batch_size = input_arguments.batch_size\n    hidden_layers = input_arguments.hidden_layers\n    hidden_size = input_arguments.hidden_size\n    plot = input_arguments.plot\n\n    ds_name = Conf.ds_name\n    simulation_options = Conf.ds_metadata\n\n    histogram_pruning_bins = Conf.histogram_bins\n    histogram_pruning_threshold = Conf.histogram_threshold\n    x_value_cap = Conf.velocity_cap\n\n    main(x_feats, u_feats, y_regressed_dims, model_ground_effect, simulation_options, ds_name,\n         
x_value_cap, histogram_pruning_bins, histogram_pruning_threshold,\n         model_name=model_name, epochs=epochs, batch_size=batch_size, hidden_layers=hidden_layers,\n         hidden_size=hidden_size, plot=plot)\n"
  },
  {
    "path": "ros_dd_mpc/src/model_fitting/mlp_quad_res_fitting.py",
    "content": "\"\"\" Executable script to train an MLP residual model on recorded flight data.\n\nThis program is free software: you can redistribute it and/or modify it under\nthe terms of the GNU General Public License as published by the Free Software\nFoundation, either version 3 of the License, or (at your option) any later\nversion.\nThis program is distributed in the hope that it will be useful, but WITHOUT\nANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\nFOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.\nYou should have received a copy of the GNU General Public License along with\nthis program. If not, see <http://www.gnu.org/licenses/>.\n\"\"\"\n\n\nimport os\nimport time\nimport argparse\nimport subprocess\nimport numpy as np\n\nimport torch\nfrom torch.utils.data import DataLoader\nimport ml_casadi.torch as mc\n\nfrom tqdm import tqdm\n\nfrom src.model_fitting.mlp_common import GPToMLPDataset, NormalizedMLP, QuadResidualModel\nfrom src.utils.utils import safe_mkdir_recursive, load_pickled_models\nfrom src.utils.utils import get_model_dir_and_file\nfrom src.model_fitting.gp_common import GPDataset, read_dataset\nfrom config.configuration_parameters import ModelFitConfig as Conf\n\n\n\ndef main(x_features, u_features, reg_y_dims, quad_sim_options, dataset_name,\n         x_cap, hist_bins, hist_thresh,\n         model_name=\"model\", epochs=100, batch_size=64, hidden_layers=1, hidden_size=32, plot=False):\n\n    \"\"\"\n    Reads the dataset specified and trains an MLP model on it. The regressed variables are the time-derivatives\n    of the selected states.\n    The feature and regressed variables are identified through the index number in the state vector. 
It is defined as:\n    [0: p_x, 1: p_y, 2: p_z, 3: q_w, 4: q_x, 5: q_y, 6: q_z, 7: v_x, 8: v_y, 9: v_z, 10: w_x, 11: w_y, 12: w_z]\n    The input vector is also indexed:\n    [0: u_0, 1: u_1, 2: u_2, 3: u_3].\n\n    :param x_features: List of n regression feature indices from the quadrotor state indexing.\n    :type x_features: list\n    :param u_features: List of n' regression feature indices from the input state.\n    :type u_features: list\n    :param reg_y_dims: Indices of the output dimensions being regressed as time-derivatives.\n    :type reg_y_dims: list\n    :param dataset_name: Name of the dataset.\n    :param quad_sim_options: Dictionary of metadata of the dataset to be read.\n    :param x_cap: cap value (in absolute value) for dataset pruning. Any input feature that exceeds this number in any\n    dimension will be removed.\n    :param hist_bins: Number of bins used for histogram pruning.\n    :param hist_thresh: Any bin with less data percentage than this number will be removed.\n    :param model_name: Given name to the currently trained model.\n    \"\"\"\n\n    # Get git commit hash for saving the model\n    git_version = ''\n    try:\n        git_version = subprocess.check_output(['git', 'describe', '--always']).strip().decode(\"utf-8\")\n    except subprocess.CalledProcessError as e:\n        print(e.returncode, e.output)\n    print(\"The model will be saved using hash: %s\" % git_version)\n\n    gp_name_dict = {\"git\": git_version, \"model_name\": model_name, \"params\": quad_sim_options}\n    save_file_path, save_file_name = get_model_dir_and_file(gp_name_dict)\n    safe_mkdir_recursive(save_file_path)\n\n    # #### DATASET LOADING #### #\n    if isinstance(dataset_name, str):\n        df_train = read_dataset(dataset_name, True, quad_sim_options)\n        gp_dataset_train = GPDataset(df_train, x_features, u_features, reg_y_dims,\n                                     cap=x_cap, n_bins=hist_bins, thresh=hist_thresh)\n        gp_dataset_val = None\n      
  try:\n            df_val = read_dataset(dataset_name, False, quad_sim_options)\n            gp_dataset_val = GPDataset(df_val, x_features, u_features, reg_y_dims,\n                                       cap=x_cap, n_bins=hist_bins, thresh=hist_thresh)\n        except Exception:\n            print('Could not find test dataset.')\n    else:\n        raise TypeError(\"dataset_name must be a string.\")\n\n    dataset_train = GPToMLPDataset(gp_dataset_train)\n    x_mean, x_std, y_mean, y_std = dataset_train.stats()\n    data_loader = DataLoader(dataset_train, batch_size=batch_size, shuffle=True, num_workers=0)\n    input_dims = len(x_features) + len(u_features)\n    mlp_model = QuadResidualModel(hidden_size, hidden_layers)\n    model = NormalizedMLP(mlp_model, torch.tensor(x_mean).float(), torch.tensor(x_std).float(),\n                          torch.tensor(y_mean).float(), torch.tensor(y_std).float())\n\n    print(f\"Train Dataset has {len(dataset_train)} points.\")\n\n    if gp_dataset_val:\n        dataset_val = GPToMLPDataset(gp_dataset_val)\n        data_loader_val = DataLoader(dataset_val, batch_size=10*batch_size, shuffle=False, num_workers=0)\n        print(f\"Val Dataset has {len(dataset_val)} points.\")\n\n    print(f\"Model has {sum(p.numel() for p in model.parameters() if p.requires_grad)} parameters.\")\n\n    if torch.cuda.is_available():\n        print('Pushing model to GPU.')\n        model = model.to('cuda:0')\n\n    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)\n\n    loss_infos = []\n    p_bar = tqdm(range(epochs))\n    for i in p_bar:\n        model.train()\n        losses = []\n        for x, y in data_loader:\n            if torch.cuda.is_available():\n                x = x.to('cuda:0')\n                y = y.to('cuda:0')\n            x = x.float()\n            y = y.float()\n            optimizer.zero_grad()\n            y_pred = model(x)\n            loss = torch.square(y - y_pred).mean()\n            loss.backward()\n            
optimizer.step()\n\n            losses.append(loss.item())\n\n        losses_val = []\n        if gp_dataset_val:\n            with torch.no_grad():\n                for x, y in data_loader_val:\n                    if torch.cuda.is_available():\n                        x = x.to('cuda:0')\n                        y = y.to('cuda:0')\n                    x = x.float()\n                    y = y.float()\n                    y_pred = model(x)\n                    loss = torch.square(y - y_pred)\n\n                    losses_val.append(loss.cpu().numpy())\n        train_loss_info = np.mean(losses)\n        loss_info = np.mean(np.vstack(losses_val)) if losses_val else np.mean(losses)\n        loss_infos.append(loss_info)\n        p_bar.set_description(f'Train Loss: {train_loss_info}, Val Loss {loss_info:.6f}')\n        p_bar.refresh()\n\n        save_dict = {\n            'state_dict': model.state_dict(),\n            'input_size': input_dims,\n            'hidden_size': hidden_size,\n            'output_size': len(reg_y_dims),\n            'hidden_layers': hidden_layers\n        }\n        torch.save(save_dict, os.path.join(save_file_path, f'{save_file_name}_{i}.pt'))\n        print(f'{i}: Val Loss {loss_info:.6f}')\n\n    if plot:\n        import matplotlib.pyplot as plt\n        plt.plot(loss_infos)\n        plt.show()\n\n\nif __name__ == '__main__':\n\n    parser = argparse.ArgumentParser()\n\n    parser.add_argument(\"--epochs\", type=int, default=\"1000\",\n                        help=\"Number of epochs to train the model.\")\n\n    parser.add_argument(\"--batch_size\", type=int, default=\"64\",\n                        help=\"Batch Size.\")\n\n    parser.add_argument(\"--hidden_layers\", type=int, default=\"1\",\n                        help=\"Number of hidden layers.\")\n\n    parser.add_argument(\"--hidden_size\", type=int, default=\"32\",\n                        help=\"Size of hidden layers.\")\n\n    parser.add_argument(\"--model_name\", type=str, 
default=\"\",\n                        help=\"Name assigned to the trained model.\")\n\n    parser.add_argument(\"--plot\", dest=\"plot\", action=\"store_true\",\n                        help=\"Plot the loss after training.\")\n    parser.set_defaults(plot=False)\n\n    input_arguments = parser.parse_args()\n\n    # Use vx, vy, vz as input features\n    x_feats = [7, 8, 9, 10, 11, 12]\n    u_feats = [0, 1, 2, 3]\n\n    # Regression dimension\n    y_regressed_dims = [7, 8, 9, 10, 11, 12]\n    model_name = input_arguments.model_name\n\n    epochs = input_arguments.epochs\n    batch_size = input_arguments.batch_size\n    hidden_layers = input_arguments.hidden_layers\n    hidden_size = input_arguments.hidden_size\n    plot = input_arguments.plot\n\n    ds_name = Conf.ds_name\n    simulation_options = Conf.ds_metadata\n\n    histogram_pruning_bins = Conf.histogram_bins\n    histogram_pruning_threshold = Conf.histogram_threshold\n    x_value_cap = Conf.velocity_cap\n\n    main(x_feats, u_feats, y_regressed_dims, simulation_options, ds_name,\n         x_value_cap, histogram_pruning_bins, histogram_pruning_threshold,\n         model_name=model_name, epochs=epochs, batch_size=batch_size, hidden_layers=hidden_layers,\n         hidden_size=hidden_size, plot=plot)\n"
  },
  {
    "path": "ros_dd_mpc/src/model_fitting/process_neurobem_dataset.py",
    "content": "import argparse\nimport os\nimport random\n\nimport numpy as np\nimport pandas as pd\nfrom scipy.signal import savgol_filter\nfrom tqdm import tqdm\n\nfrom config.configuration_parameters import DirectoryConfig\nfrom config.configuration_parameters import ModelFitConfig as Conf\nfrom src.experiments.point_tracking_and_record import make_record_dict, jsonify\nfrom src.quad_mpc.create_ros_dd_mpc import custom_quad_param_loader\nfrom src.quad_mpc.quad_3d_mpc import Quad3DMPC\nfrom src.utils.utils import safe_mkdir_recursive, v_dot_q\n\nval_files = ['merged_2021-02-03-16-58-13_seg_2.csv']\n\ndef main(quad):\n    full_path = os.path.join(DirectoryConfig.DATA_DIR, 'neurobem_processed_data_cmd')\n    files = os.listdir(full_path)\n    random.shuffle(files)\n\n    rec_dicts = []\n    for file in tqdm(files):\n        if file in val_files:\n            print('Skipping Validation File')\n            continue\n        try:\n            rec_dict = make_record_dict(state_dim=13)\n            process_file(os.path.join(full_path, file), quad, rec_dict)\n            rec_dicts.append(rec_dict)\n        except Exception as e:\n            print(e)\n\n    rec_dict = {}\n    rec_dict['state_in'] = np.concatenate([data_dict['state_in'] for data_dict in rec_dicts], axis=0)\n    rec_dict['input_in'] = np.concatenate([data_dict['input_in'] for data_dict in rec_dicts], axis=0)\n    rec_dict['state_out'] = np.concatenate([data_dict['state_out'] for data_dict in rec_dicts], axis=0)\n    rec_dict['state_ref'] = np.concatenate([data_dict['state_ref'] for data_dict in rec_dicts], axis=0)\n    rec_dict['timestamp'] = np.concatenate([data_dict['timestamp'] for data_dict in rec_dicts], axis=0)\n    rec_dict['dt'] = np.concatenate([data_dict['dt'] for data_dict in rec_dicts], axis=0)\n    rec_dict['state_pred'] = np.concatenate([data_dict['state_pred'] for data_dict in rec_dicts], axis=0)\n    rec_dict['error'] = np.concatenate([data_dict['error'] for data_dict in rec_dicts], 
axis=0)\n\n    del rec_dicts\n\n    # Save datasets\n    save_file_folder = os.path.join(DirectoryConfig.DATA_DIR, 'neurobem_dataset', 'train')\n    safe_mkdir_recursive(save_file_folder)\n    save_file = os.path.join(save_file_folder, f'npz_dataset_001.npz')\n    np.savez(save_file, **rec_dict)\n\n\n    # # Validation\n    # rec_dicts = []\n    # for file in tqdm(files[-20:]):\n    #     try:\n    #         rec_dict = make_record_dict(state_dim=13)\n    #         process_file(os.path.join(full_path, file), quad, rec_dict)\n    #         rec_dicts.append(rec_dict)\n    #     except Exception as e:\n    #         print(e)\n    #\n    # rec_dict = {}\n    # rec_dict['state_in'] = np.concatenate([data_dict['state_in'] for data_dict in rec_dicts], axis=0)\n    # rec_dict['input_in'] = np.concatenate([data_dict['input_in'] for data_dict in rec_dicts], axis=0)\n    # rec_dict['state_out'] = np.concatenate([data_dict['state_out'] for data_dict in rec_dicts], axis=0)\n    # rec_dict['state_ref'] = np.concatenate([data_dict['state_ref'] for data_dict in rec_dicts], axis=0)\n    # rec_dict['timestamp'] = np.concatenate([data_dict['timestamp'] for data_dict in rec_dicts], axis=0)\n    # rec_dict['dt'] = np.concatenate([data_dict['dt'] for data_dict in rec_dicts], axis=0)\n    # rec_dict['state_pred'] = np.concatenate([data_dict['state_pred'] for data_dict in rec_dicts], axis=0)\n    # rec_dict['error'] = np.concatenate([data_dict['error'] for data_dict in rec_dicts], axis=0)\n    #\n    # del rec_dicts\n    #\n    # # Save datasets\n    # save_file_folder = os.path.join(DirectoryConfig.DATA_DIR, 'neurobem_dataset', 'test')\n    # safe_mkdir_recursive(save_file_folder)\n    # save_file = os.path.join(save_file_folder, f'npz_dataset_001.npz')\n    # np.savez(save_file, **rec_dict)\n\ndef quaternion_to_euler(q):\n    yaw = np.arctan2(2 * (q[:, 0] * q[:, 3] - q[:, 1] * q[:, 2]),\n        1 - 2 * (q[:, 2] ** 2 + q[:, 3] ** 2))\n    pitch = np.arcsin(2 * (q[:, 0] * q[:, 2] + q[:, 
3] * q[:, 1]))\n    roll = np.arctan2(2 * (q[:, 0] * q[:, 1] - q[:, 2] * q[:, 3]),\n        1 - 2 * (q[:, 1] ** 2 + q[:, 2] ** 2))\n\n    return np.stack([yaw, pitch, roll], axis=-1)\n\ndef val(quad):\n    full_path = os.path.join(DirectoryConfig.DATA_DIR, 'neurobem_processed_data_cmd')\n    #val_files = ['merged_2021-02-23-22-54-17_seg_1.csv', 'merged_2021-02-23-17-35-26_seg_1.csv'] # 'merged_2021-02-18-18-09-47_seg_1.csv',\n\n    rec_dicts = []\n    for file in tqdm(val_files):\n        try:\n            rec_dict = make_record_dict(state_dim=13)\n            process_file(os.path.join(full_path, file), quad, rec_dict)\n            rec_dicts.append(rec_dict)\n        except Exception as e:\n            print(e)\n\n    rec_dict = {}\n    rec_dict['state_in'] = np.concatenate([data_dict['state_in'] for data_dict in rec_dicts], axis=0)\n    rec_dict['input_in'] = np.concatenate([data_dict['input_in'] for data_dict in rec_dicts], axis=0)\n    rec_dict['state_out'] = np.concatenate([data_dict['state_out'] for data_dict in rec_dicts], axis=0)\n    rec_dict['state_ref'] = np.concatenate([data_dict['state_ref'] for data_dict in rec_dicts], axis=0)\n    rec_dict['timestamp'] = np.concatenate([data_dict['timestamp'] for data_dict in rec_dicts], axis=0)\n    rec_dict['dt'] = np.concatenate([data_dict['dt'] for data_dict in rec_dicts], axis=0)\n    rec_dict['state_pred'] = np.concatenate([data_dict['state_pred'] for data_dict in rec_dicts], axis=0)\n    rec_dict['error'] = np.concatenate([data_dict['error'] for data_dict in rec_dicts], axis=0)\n\n    del rec_dicts\n\n    # Save datasets\n    save_file_folder = os.path.join(DirectoryConfig.DATA_DIR, 'neurobem_dataset', 'test')\n    safe_mkdir_recursive(save_file_folder)\n    save_file = os.path.join(save_file_folder, f'npz_dataset_001.npz')\n    np.savez(save_file, **rec_dict)\n\ndef process_file(file_path, quad, rec_dict):\n    data = pd.read_csv(file_path, encoding='latin-1')\n    for x_0, x_f, u, dt in 
consecutive_data_points(data):\n        resimulate(x_0, x_f, u, dt, quad, rec_dict)\n\n\ndef consecutive_data_points(data):\n    for i in range(25, len(data)-1, 1):\n        data_0 = data.iloc[i]\n        data_1 = data.iloc[i+1]\n        data_c = data.iloc[i]  # Communication Delay\n        t0 = data_0['t']\n        t1 = data_1['t']\n        dt = t1 - t0\n\n        x_0 = np.hstack([\n            data_0['pos x'],\n            data_0['pos y'],\n            data_0['pos z'],\n            data_0['quat w'],\n            data_0['quat x'],\n            data_0['quat y'],\n            data_0['quat z'],\n            data_0['vel x'],\n            data_0['vel y'],\n            data_0['vel z'],\n            data_0['ang vel x'],\n            data_0['ang vel y'],\n            data_0['ang vel z'],\n        ])\n\n        x_1 = np.hstack([\n            data_1['pos x'],\n            data_1['pos y'],\n            data_1['pos z'],\n            data_1['quat w'],\n            data_1['quat x'],\n            data_1['quat y'],\n            data_1['quat z'],\n            data_1['vel x'],\n            data_1['vel y'],\n            data_1['vel z'],\n            data_1['ang vel x'],\n            data_1['ang vel y'],\n            data_1['ang vel z'],\n        ])\n\n        # Velocity to world coordinate frame\n        x_0[7:10] = v_dot_q(x_0[7:10], x_0[3:7])\n        x_1[7:10] = v_dot_q(x_1[7:10], x_1[3:7])\n\n        # Solve single rotor thrusts\n        Jm1 = 1 / quad.J\n\n        a1 = Jm1[0] * quad.y_f\n        b1 = data_0['ang acc x'] - Jm1[0] * (quad.J[1] - quad.J[2]) * x_0[11] * x_0[12]\n        a2 = -Jm1[1] * quad.x_f\n        b2 = data_0['ang acc y'] - Jm1[1] * (quad.J[2] - quad.J[0]) * x_0[12] * x_0[10]\n        a3 = Jm1[2] * quad.z_l_tau\n        b3 = data_0['ang acc z'] - Jm1[2] * (quad.J[0] - quad.J[1]) * x_0[10] * x_0[11]\n        a4 = np.ones(4,)\n        b4 = data_c['cmd_thrust'] * quad_mpc.quad.mass\n\n        u = np.linalg.solve([a1, a2, a3, a4], [b1, b2, b3, b4]) / 
quad.max_thrust\n\n        # ypr_0 = quaternion_to_euler(x_0[np.newaxis, 3:7])[0][::-1]\n        # x_0[12] = 1.0 * x_0[10] + np.sin(ypr_0[0]) * np.tan(ypr_0[1]) * x_0[11] + np.cos(ypr_0[0]) * np.tan(ypr_0[1]) * x_0[12]\n        # x_0[11] = np.cos(ypr_0[0]) * x_0[11] - np.sin(ypr_0[0]) * x_0[12]\n        # x_0[10] = np.sin(ypr_0[0]) / np.cos(ypr_0[1]) * x_0[11] + np.cos(ypr_0[0]) / np.cos(ypr_0[1]) * x_0[12]\n        #\n        # ypr_1 = quaternion_to_euler(x_1[np.newaxis, 3:7])[0][::-1]\n        # x_1[10] = 1.0 * x_1[10] + np.sin(ypr_1[0]) * np.tan(ypr_1[1]) * x_1[11] + np.cos(ypr_1[0]) * np.tan(ypr_1[1]) * \\\n        #           x_1[12]\n        # x_1[11] = np.cos(ypr_1[0]) * x_1[11] - np.sin(ypr_1[0]) * x_1[12]\n        # x_1[12] = np.sin(ypr_1[0]) / np.cos(ypr_1[1]) * x_1[11] + np.cos(ypr_1[0]) / np.cos(ypr_1[1]) * x_1[12]\n\n        # ypr_0 = quaternion_to_euler(x_0[np.newaxis, 3:7])[0][::-1]\n        # x_0[10] = 1.0 * x_0[10] - np.sin(ypr_0[1]) * x_0[12]\n        # x_0[11] = np.cos(ypr_0[0]) * x_0[11] + np.sin(ypr_0[0]) * np.cos(ypr_0[1]) * x_0[12]\n        # x_0[12] = -np.sin(ypr_0[0]) * x_0[11] + np.cos(ypr_0[0]) * np.cos(ypr_0[1]) * x_0[12]\n        #\n        # ypr_1 = quaternion_to_euler(x_1[np.newaxis, 3:7])[0][::-1]\n        # x_1[10] = 1.0 * x_1[10] - np.sin(ypr_1[1]) * x_1[12]\n        # x_1[11] = np.cos(ypr_1[0]) * x_1[11] + np.sin(ypr_1[0]) * np.cos(ypr_1[1]) * x_1[12]\n        # x_1[12] = -np.sin(ypr_1[0]) * x_1[11] + np.cos(ypr_1[0]) * np.cos(ypr_1[1]) * x_1[12]\n\n        # u = np.hstack([\n        #     data_0['mot 2'],\n        #     data_0['mot 3'],\n        #     data_0['mot 1'],\n        #     data_0['mot 4'],\n        # ])\n        #\n        # u = u ** 2 * quad.thrust_map[0] / quad.max_thrust\n\n        yield x_0, x_1, u, dt\n\ndef f_rate(x, u, quad):\n    \"\"\"\n    Time-derivative of the angular rate\n    :param x: 13-length array of input state with components: 3D pos, quaternion angle, 3D vel, 3D rate\n    :param u: control input 
vector (4-dimensional): [thrust_motor_1, ..., thrust_motor_4]\n    :param quad: Quadrotor3D object with the inertia and rotor geometry parameters\n    :return: angular rate time derivative (3D vector): dr/dt\n    \"\"\"\n\n    rate = x[10:]\n    return np.array([\n        1 / quad.J[0] * (u.dot(quad.y_f) + (quad.J[1] - quad.J[2]) * rate[1] * rate[2]),\n        1 / quad.J[1] * (-u.dot(quad.x_f) + (quad.J[2] - quad.J[0]) * rate[2] * rate[0]),\n        1 / quad.J[2] * (u.dot(quad.z_l_tau) + (quad.J[0] - quad.J[1]) * rate[0] * rate[1])\n    ]).squeeze()\n\ndef resimulate(x_0, x_f, u, dt, quad_mpc, rec_dict):\n    x_pred, _ = quad_mpc.forward_prop(x_0, u, t_horizon=dt, use_gp=False)\n    x_pred = x_pred[-1, np.newaxis, :]\n\n    rec_dict['state_in'] = np.append(rec_dict['state_in'], x_0[np.newaxis, :], axis=0)\n    rec_dict['input_in'] = np.append(rec_dict['input_in'], u[np.newaxis, :], axis=0)\n    rec_dict['state_out'] = np.append(rec_dict['state_out'], x_f[np.newaxis, :], axis=0)\n    rec_dict['state_ref'] = np.append(rec_dict['state_ref'], np.zeros_like(x_f[np.newaxis, :]), axis=0)\n    rec_dict['timestamp'] = np.append(rec_dict['timestamp'], np.zeros_like(dt))\n    rec_dict['dt'] = np.append(rec_dict['dt'], dt)\n    rec_dict['state_pred'] = np.append(rec_dict['state_pred'], x_pred, axis=0)\n    rec_dict['error'] = np.append(rec_dict['error'], x_f - x_pred, axis=0)\n\n\nif __name__ == '__main__':\n    parser = argparse.ArgumentParser()\n\n    parser.add_argument(\"--quad\", type=str, default=\"kingfisher\",\n                        help=\"Name of the quad.\")\n\n    input_arguments = parser.parse_args()\n\n    ds_name = Conf.ds_name\n    simulation_options = Conf.ds_metadata\n\n    quad = custom_quad_param_loader(input_arguments.quad)\n    quad_mpc = Quad3DMPC(quad)\n\n    main(quad_mpc)\n    val(quad_mpc)\n"
  },
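The rotor-thrust recovery in `consecutive_data_points` inverts Euler's rigid-body equations (three rows) together with the commanded mass-normalized collective thrust (fourth row). A minimal sketch of that 4x4 solve; the geometry values below (`J`, arm length, `c`, `mass`) are illustrative, not the repo's kingfisher parameters:

```python
import numpy as np

# Illustrative 'x'-configuration geometry; these are made-up parameters.
J = np.array([0.03, 0.03, 0.06])      # diagonal inertia [kg m^2]
h = np.cos(np.pi / 4) * 0.125         # rotor arm projection [m]
x_f = np.array([h, -h, -h, h])
y_f = np.array([-h, h, -h, h])
c = 0.013                             # z-torque coefficient [m]
z_l_tau = np.array([-c, -c, c, c])
mass = 1.0                            # [kg]


def solve_rotor_thrusts(ang_vel, ang_acc, cmd_thrust):
    """Recover the four rotor thrusts [N] by inverting Euler's equations
    (rows 1-3) plus the commanded mass-normalized collective thrust
    (row 4), mirroring the a1..a4 / b1..b4 system in the script."""
    Jm1 = 1 / J
    a1 = Jm1[0] * y_f
    b1 = ang_acc[0] - Jm1[0] * (J[1] - J[2]) * ang_vel[1] * ang_vel[2]
    a2 = -Jm1[1] * x_f
    b2 = ang_acc[1] - Jm1[1] * (J[2] - J[0]) * ang_vel[2] * ang_vel[0]
    a3 = Jm1[2] * z_l_tau
    b3 = ang_acc[2] - Jm1[2] * (J[0] - J[1]) * ang_vel[0] * ang_vel[1]
    a4 = np.ones(4)
    b4 = cmd_thrust * mass            # collective thrust [N]
    return np.linalg.solve(np.stack([a1, a2, a3, a4]),
                           np.array([b1, b2, b3, b4]))


# Hover sanity check: no rotation, commanded acceleration g -> four equal
# thrusts summing to m*g.
u = solve_rotor_thrusts(np.zeros(3), np.zeros(3), 9.8066)
```

For the symmetric 'x' geometry the four row vectors are mutually orthogonal, so the system is always well-conditioned.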
  {
    "path": "ros_dd_mpc/src/model_fitting/rdrv_fitting.py",
    "content": "\"\"\" Implementation and fitting of the RDRv linear regression model on flight data.\n\nThis program is free software: you can redistribute it and/or modify it under\nthe terms of the GNU General Public License as published by the Free Software\nFoundation, either version 3 of the License, or (at your option) any later\nversion.\nThis program is distributed in the hope that it will be useful, but WITHOUT\nANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\nFOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.\nYou should have received a copy of the GNU General Public License along with\nthis program. If not, see <http://www.gnu.org/licenses/>.\n\"\"\"\n\n\nimport os\nimport joblib\nimport argparse\nimport subprocess\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn.linear_model import LinearRegression\nfrom src.model_fitting.gp_common import GPDataset, read_dataset\nfrom src.utils.utils import v_dot_q, get_model_dir_and_file, safe_mknode_recursive\nfrom config.configuration_parameters import ModelFitConfig as Conf\n\n\ndef linear_rdrv_fitting(x, y, feat_idx):\n    drag_coeffs = np.zeros((3, 3))\n    for i in range(x.shape[1]):\n        reg = LinearRegression(fit_intercept=False).fit(x[:, i, np.newaxis], y[:, i])\n        drag_coeffs[feat_idx[i], feat_idx[i]] = reg.coef_\n\n    return drag_coeffs\n\n\ndef load_rdrv(model_options):\n    directory, file_name = get_model_dir_and_file(model_options)\n    rdrv_d = joblib.load(os.path.join(directory, file_name + '.pkl'))\n    return rdrv_d\n\n\ndef main(model_name, features, quad_sim_options, dataset_name,\n         x_cap, hist_bins, hist_thresh, plot=False):\n\n    df_train = read_dataset(dataset_name, True, quad_sim_options)\n    gp_dataset = GPDataset(df_train, features, [], features,\n                           cap=x_cap, n_bins=hist_bins, thresh=hist_thresh, visualize_data=False)\n\n    # Get X,Y datasets for the specified 
regression dimensions (subset of [7, 8, 9])\n    a_err_b = gp_dataset.y\n    v_b = gp_dataset.x\n\n    drag_coeffs = linear_rdrv_fitting(v_b, a_err_b, np.array(features) - 7)\n\n    # Get git commit hash for saving the model\n    git_version = ''\n    try:\n        git_version = subprocess.check_output(['git', 'describe', '--always']).strip().decode(\"utf-8\")\n    except subprocess.CalledProcessError as e:\n        print(e.returncode, e.output)\n    gp_name_dict = {\"git\": git_version, \"model_name\": model_name, \"params\": quad_sim_options}\n    save_file_path, save_file_name = get_model_dir_and_file(gp_name_dict)\n    safe_mknode_recursive(save_file_path, save_file_name + '.pkl', overwrite=True)\n    file_name = os.path.join(save_file_path, save_file_name + '.pkl')\n    joblib.dump(drag_coeffs, file_name)\n\n    if not plot:\n        return drag_coeffs\n\n    # Get X,Y datasets for velocity dimensions [7, 8, 9]\n    a_err_b = gp_dataset.get_y(pruned=True, raw=True)[:, 7:10]\n    v_b = gp_dataset.get_x(pruned=True, raw=True)[:, 7:10]\n\n    # Compute predictions with RDRv model\n    preds = []\n    for i in range(len(a_err_b)):\n        preds.append(np.matmul(drag_coeffs, v_b[i]))\n    preds = np.stack(preds)\n\n    ax_names = [r'$v_B^x$', r'$v_B^y$', r'$v_B^z$']\n    y_dim_name = [r'drag $a_B^x$', 'drag $a_B^y$', 'drag $a_B^z$']\n    fig, ax = plt.subplots(len(features), 1, sharex='all')\n    for i in range(len(features)):\n        ax[i].scatter(v_b[:, i], a_err_b[:, i], label='data')\n        ax[i].scatter(v_b[:, i], preds[:, i], label='RDRv')\n        ax[i].legend()\n        ax[i].set_ylabel(y_dim_name[features[i] - 7])\n        ax[i].set_xlabel(ax_names[features[i] - 7])\n    ax[0].set_title('Body reference frame')\n\n    # Remap error to world coordinates and predictions too\n    q = gp_dataset.get_x(pruned=True, raw=True)[:, 3:7]\n    for i in range(len(a_err_b)):\n        a_err_b[i] = v_dot_q(a_err_b[i], q[i])\n        preds[i] = v_dot_q(preds[i], 
q[i])\n\n    ax_names = [r'$v_W^x$', r'$v_W^y$', r'$v_W^z$']\n    y_dim_name = [r'drag $a_W^x$', 'drag $a_W^y$', 'drag $a_W^z$']\n    fig, ax = plt.subplots(len(features), 1, sharex='all')\n    for i in range(len(features)):\n        ax[i].scatter(v_b[:, i], a_err_b[:, i], label='data')\n        ax[i].scatter(v_b[:, i], preds[:, i], label='RDRv')\n        ax[i].legend()\n        ax[i].set_ylabel(y_dim_name[features[i] - 7])\n        ax[i].set_xlabel(ax_names[features[i] - 7])\n    ax[0].set_title('World reference frame')\n    plt.show()\n\n    return drag_coeffs\n\n\nif __name__ == \"__main__\":\n    parser = argparse.ArgumentParser()\n\n    parser.add_argument(\"--model_name\", type=str, default=\"\",\n                        help=\"Name assigned to the trained model.\")\n\n    parser.add_argument('--x', nargs='+', type=int, default=[7, 8, 9],\n                        help='Regression X and Y variables. Must be a list of integers between 0 and 12.'\n                             'Velocities xyz correspond to indices 7, 8, 9.')\n\n    input_arguments = parser.parse_args()\n\n    reg_dimensions = input_arguments.x\n    rdrv_name = input_arguments.model_name\n\n    histogram_bins = Conf.histogram_bins\n    histogram_threshold = Conf.histogram_threshold\n    histogram_cap = Conf.velocity_cap\n\n    ds_name = Conf.ds_name\n    simulation_options = Conf.ds_metadata\n\n    main(model_name=rdrv_name, features=reg_dimensions, quad_sim_options=simulation_options, dataset_name=ds_name,\n         x_cap=histogram_cap, hist_bins=histogram_bins, hist_thresh=histogram_threshold,\n         plot=Conf.visualize_training_result)\n"
  },
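`linear_rdrv_fitting` fits one zero-intercept drag coefficient per velocity axis and places it on a diagonal 3x3 matrix. A self-contained sketch of that model on synthetic data, assuming all three velocity axes are regressed; the closed-form through-origin slope replaces sklearn's `LinearRegression(fit_intercept=False)`, and the coefficients are invented:

```python
import numpy as np


def fit_diagonal_rdrv(v_b, a_err_b):
    """Fit a_err_i ~ d_i * v_i per axis with no intercept and assemble the
    diagonal drag matrix, as linear_rdrv_fitting does (closed form here)."""
    drag = np.zeros((3, 3))
    for i in range(v_b.shape[1]):
        # Least-squares slope of y = d*x through the origin: d = <x,y>/<x,x>
        drag[i, i] = v_b[:, i] @ a_err_b[:, i] / (v_b[:, i] @ v_b[:, i])
    return drag


# Synthetic check with made-up coefficients: x/y drag stronger than z.
rng = np.random.default_rng(0)
v = rng.normal(size=(500, 3))                       # body-frame velocities
true_d = np.diag([-0.30, -0.30, -0.05])             # invented RDRv matrix
a_err = v @ true_d + 0.01 * rng.normal(size=v.shape)
D = fit_diagonal_rdrv(v, a_err)
```

The recovered diagonal matches the generating coefficients to within the noise level, which is the behavior the plotting branch of `main` visualizes against flight data.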
  {
    "path": "ros_dd_mpc/src/model_fitting/system_identification.py",
    "content": "import argparse\nimport os\n\nimport numpy as np\nimport pandas as pd\nimport rosbag\nimport rospy\n\nfrom config.configuration_parameters import ModelFitConfig as Conf\nfrom src.experiments.point_tracking_and_record import make_record_dict\nfrom src.quad_mpc.create_ros_dd_mpc import custom_quad_param_loader\nfrom src.quad_mpc.quad_3d_mpc import Quad3DMPC\nfrom src.utils.utils import jsonify, get_data_dir_and_file\n\n\ndef odometry_parse(odom_msg):\n    p = [odom_msg.pose.pose.position.x, odom_msg.pose.pose.position.y, odom_msg.pose.pose.position.z]\n    q = [odom_msg.pose.pose.orientation.w, odom_msg.pose.pose.orientation.x, odom_msg.pose.pose.orientation.y,\n         odom_msg.pose.pose.orientation.z]\n    v = [odom_msg.twist.twist.linear.x, odom_msg.twist.twist.linear.y, odom_msg.twist.twist.linear.z]\n    w = [odom_msg.twist.twist.angular.x, odom_msg.twist.twist.angular.y, odom_msg.twist.twist.angular.z]\n\n    return np.array(p + q + v + w)\n\n\ndef thrust_motor_model(motor_tau, thrust, thrust_des, dt):\n    if motor_tau < 1e-12:\n        return thrust_des\n    tau_inv = 1 / motor_tau\n    thrust_out = (\n            tau_inv ** 2 * (thrust_des - 2 * (thrust * thrust_des) ** 0.5 + thrust) * dt ** 2\n            + 2 * tau_inv * ((thrust * thrust_des) ** 0.5 - thrust) * dt\n            + thrust\n    )\n    return thrust_out\n\n\ndef system_identification(quad_mpc, ds_name, sim_options):\n    \"\"\"\n    Processes a rosbag with recorded odometry and send commands to do a system identification of the residual error.\n    :param quad_mpc: Quad model used for forward simulation\n    :param ds_name: Dataset name\n    :param sim_options: Simulation options.\n    :return:\n    \"\"\"\n    # Open bag\n    data_file = get_data_dir_and_file(ds_name, training_split=True, params=sim_options, read_only=True)\n    if data_file is None:\n        raise FileNotFoundError\n    rec_file_dir, rec_file_name = data_file\n    rec_file_name_bag = 
rec_file_name.replace('.csv', '.bag')\n    rec_file = os.path.join(rec_file_dir, rec_file_name_bag)\n    save_file = os.path.join(rec_file_dir, rec_file_name)\n    bag = rosbag.Bag(rec_file)\n\n    # Re-simulate thrust with motor model and communication delay\n    thrust = None\n    control = {'t': [], 'thrust': []}\n    for topic, msg, t in bag.read_messages(topics=['control']):\n        t = msg.header.stamp\n        desired_thrust = np.array(msg.thrusts)\n        if thrust is None:\n            thrust = desired_thrust\n        t = t + rospy.Duration.from_sec(quad_mpc.quad.comm_delay) + rospy.Duration.from_sec(0.001)\n        for dt in np.arange(0.001, 0.021, step=0.001):\n            thrust = thrust_motor_model(quad_mpc.quad.motor_tau, thrust, desired_thrust, 0.001)\n            t_at = t + rospy.Duration.from_sec(dt)\n            control['t'].append(t_at.to_sec())\n            control['thrust'].append(thrust)\n\n    control = pd.DataFrame(control)\n    control = control.set_index('t')\n    control = control[~control.index.duplicated(keep='last')]\n    control = control.sort_index()\n\n    state = {'t': [], 'state': []}\n    states_list = []\n    recording = False\n    for topic, msg, t in bag.read_messages(topics=['recording_ctrl', 'state']):\n        if topic == 'recording_ctrl':\n            recording = msg.data\n        if topic == 'state' and recording:\n            t = msg.header.stamp\n            state['t'].append(t.to_sec())\n            x = odometry_parse(msg)\n            state['state'].append(x)\n            states_list.append(np.array(x))\n\n    states_np = np.array(states_list)\n\n    filtered_states_np = states_np.copy()\n    from scipy.signal import savgol_filter\n    window_size_xy = 31\n    window_size_z = 31\n    window_size_q = 31\n    window_size_v = 31\n    window_size_w = 121\n    poly_order = 4\n    filtered_states_np[:, :2] = savgol_filter(filtered_states_np[:, :2], window_size_xy, poly_order, axis=0)\n    filtered_states_np[:, 2] = 
savgol_filter(filtered_states_np[:, 2], window_size_z, poly_order, axis=0)\n    filtered_states_np[:, 3:7] = savgol_filter(filtered_states_np[:, 3:7], window_size_q, 2, axis=0)\n    filtered_states_np[:, 3:7] = filtered_states_np[:, 3:7] / np.linalg.norm(filtered_states_np[:, 3:7], axis=1, keepdims=True)\n    filtered_states_np[:, 7:10] = savgol_filter(filtered_states_np[:, 7:10], window_size_v, 2, axis=0)\n    filtered_states_np[:, 10:] = savgol_filter(filtered_states_np[:, 10:], window_size_w, poly_order, axis=0)\n\n    # def extract_windows(array, sub_window_size):\n    #     examples = []\n    #\n    #     for i in range(0, array.shape[0]-sub_window_size, sub_window_size):\n    #         example = array[i:sub_window_size + i]\n    #         examples.append(np.expand_dims(example, 0))\n    #\n    #     return np.vstack(examples)\n    #\n    # from scipy.spatial.transform import Rotation as R\n    # from scipy.spatial.transform import RotationSpline\n    # window_rot = extract_windows(filtered_states_np[:, 3:7], 20)\n    # mean_rot = []\n    # for i in range(window_rot.shape[0]):\n    #     mean_rot.append(R.from_quat(window_rot[i]).mean().as_quat())\n    # mean_rot = np.vstack(mean_rot)\n    # rotations = R.from_quat(mean_rot)\n    # spline = RotationSpline(state['t'][9::20][:mean_rot.shape[0]], rotations)\n    # filtered_states_np[:, 3:7] = spline(state['t']).as_quat()\n\n    import matplotlib.pyplot as plt\n    # plt.plot(states_np[:, 9])\n    #plt.plot(filtered_states_np[3975:4025, 0])\n    #plt.plot(states_np[3975:4025, 0])\n    #plt.plot(np.gradient(filtered_states_np[3975:4025, 1])/1e-2)\n    #plt.plot(filtered_states_np[3975:4025, 7])\n    #plt.plot(filtered_states_np[1800:2000, 3])\n    #plt.plot(states_np[1800:2000, 3])\n    #plt.plot(filtered_states_np[3775:4025, 3:7] * np.sign(filtered_states_np[3775:4025, 3]))\n    #plt.plot(filtered_states_np[3775:4025, 10])\n    #plt.plot(states_np[3775:4025, 10])\n    #plt.show()\n\n    state = {'t': state['t'], 
'state': []}\n    for i in range(filtered_states_np.shape[0]):\n        state['state'].append(filtered_states_np[i])\n\n    rec_dict = make_record_dict(state_dim=13)\n    for i in range(1, len(state['t'])):\n        last_state_idx = control.index.get_loc(state['t'][i-1], method='nearest')\n        curr_state_idx = control.index.get_loc(state['t'][i], method='nearest')\n        u0 = control['thrust'].iloc[last_state_idx] / quad_mpc.quad.max_thrust\n        u1 = control['thrust'].iloc[curr_state_idx] / quad_mpc.quad.max_thrust\n\n        # Only use subsequent states with similar thrust state for system identification (within 1%)\n        if np.all(np.abs(u0 - u1) < 0.01):\n            x_0 = state['state'][i-1]\n            x_f = state['state'][i]\n            u = np.vstack([u0, u1]).mean(0)\n            dt = state['t'][i] - state['t'][i-1]\n            if dt > 0.015:\n                continue\n            x_pred, _ = quad_mpc.forward_prop(x_0, u, t_horizon=dt, use_gp=False)\n            x_pred = x_pred[-1, np.newaxis, :]\n\n            rec_dict['state_in'] = np.append(rec_dict['state_in'], x_0[np.newaxis, :], axis=0)\n            rec_dict['input_in'] = np.append(rec_dict['input_in'], u[np.newaxis, :], axis=0)\n            rec_dict['state_out'] = np.append(rec_dict['state_out'], x_f[np.newaxis, :], axis=0)\n            rec_dict['state_ref'] = np.append(rec_dict['state_ref'], np.zeros_like(x_f[np.newaxis, :]), axis=0)\n            rec_dict['timestamp'] = np.append(rec_dict['timestamp'], state['t'][i])\n            rec_dict['dt'] = np.append(rec_dict['dt'], dt)\n            rec_dict['state_pred'] = np.append(rec_dict['state_pred'], x_pred, axis=0)\n            rec_dict['error'] = np.append(rec_dict['error'], x_f - x_pred, axis=0)\n\n    # Save datasets\n    for key in rec_dict.keys():\n        print(key, \" \", rec_dict[key].shape)\n        rec_dict[key] = jsonify(rec_dict[key])\n    df = pd.DataFrame(rec_dict)\n    df.to_csv(save_file, index=True, header=True)\n    
return\n    import matplotlib.pyplot as plt\n    #plt.plot(states_np[:, 9])\n    plt.plot(np.gradient(out[:, 8]))\n    plt.show()\n    return\n\n    # Match each state to the corresponding thrust state of the quad\n    rec_dict = make_record_dict(state_dim=13)\n    recording = False\n    last_state_msg = None\n    for topic, msg, t in bag.read_messages(topics=['recording_ctrl', 'state']):\n        if topic == 'recording_ctrl':\n            recording = msg.data\n            if not recording:\n                last_state_msg = None\n        if topic == 'state' and recording:\n            if last_state_msg is not None:\n                last_state_idx = control.index.get_loc(last_state_msg.header.stamp.to_sec(), method='nearest')\n                curr_state_idx = control.index.get_loc(msg.header.stamp.to_sec(), method='nearest')\n                u0 = control['thrust'].iloc[last_state_idx] / quad_mpc.quad.max_thrust\n                u1 = control['thrust'].iloc[curr_state_idx] / quad_mpc.quad.max_thrust\n\n                # Only use subsequent states with similar thrust state for system identification (within 1%)\n                if np.all(np.abs(u0 - u1) < 0.01):\n                    x_0 = odometry_parse(last_state_msg)\n                    x_f = odometry_parse(msg)\n                    u = np.vstack([u0, u1]).mean(0)\n                    dt = msg.header.stamp.to_sec() - last_state_msg.header.stamp.to_sec()\n                    x_pred, _ = quad_mpc.forward_prop(x_0, u, t_horizon=dt, use_gp=False)\n                    x_pred = x_pred[-1, np.newaxis, :]\n\n                    rec_dict['state_in'] = np.append(rec_dict['state_in'], x_0[np.newaxis, :], axis=0)\n                    rec_dict['input_in'] = np.append(rec_dict['input_in'], u[np.newaxis, :], axis=0)\n                    rec_dict['state_out'] = np.append(rec_dict['state_out'], x_f[np.newaxis, :], axis=0)\n                    rec_dict['state_ref'] = np.append(rec_dict['state_ref'], np.zeros_like(x_f[np.newaxis, :]), 
axis=0)\n                    rec_dict['timestamp'] = np.append(rec_dict['timestamp'], msg.header.stamp.to_sec())\n                    rec_dict['dt'] = np.append(rec_dict['dt'], dt)\n                    rec_dict['state_pred'] = np.append(rec_dict['state_pred'], x_pred, axis=0)\n                    rec_dict['error'] = np.append(rec_dict['error'], x_f - x_pred, axis=0)\n\n            last_state_msg = msg\n\n    # Save datasets\n    for key in rec_dict.keys():\n        print(key, \" \", rec_dict[key].shape)\n        rec_dict[key] = jsonify(rec_dict[key])\n    df = pd.DataFrame(rec_dict)\n    df.to_csv(save_file, index=True, header=True)\n\n\nif __name__ == '__main__':\n\n    parser = argparse.ArgumentParser()\n\n    parser.add_argument(\"--quad\", type=str, default=\"kingfisher\",\n                        help=\"Name of the quad.\")\n\n    input_arguments = parser.parse_args()\n\n    ds_name = Conf.ds_name\n    simulation_options = Conf.ds_metadata\n\n    quad = custom_quad_param_loader(input_arguments.quad)\n    quad_mpc = Quad3DMPC(quad)\n\n    system_identification(quad_mpc, ds_name, simulation_options)"
  },
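The quadratic-in-`dt` expression in `thrust_motor_model` looks opaque, but it factors exactly: with `k = dt / motor_tau` it equals `(sqrt(T) + k * (sqrt(T_des) - sqrt(T)))**2`, i.e. a first-order lag applied to the square root of thrust (motor speed, since thrust is proportional to omega squared). A sketch using that algebraically identical factored form; the `0.033` time constant and setpoints are illustrative:

```python
import numpy as np


def thrust_motor_model(motor_tau, thrust, thrust_des, dt):
    """Same update as in system_identification.py, rewritten as a
    first-order lag on sqrt(thrust) (i.e. on motor speed)."""
    if motor_tau < 1e-12:
        return thrust_des           # instantaneous motor
    k = dt / motor_tau
    return (np.sqrt(thrust) + k * (np.sqrt(thrust_des) - np.sqrt(thrust))) ** 2


# Stepping at 1 kHz (as the bag-resimulation loop does) drives the modelled
# thrust toward the commanded value with time constant motor_tau.
thrust = 1.0
for _ in range(200):
    thrust = thrust_motor_model(0.033, thrust, 4.0, 0.001)
```

Expanding the square reproduces the `tau_inv ** 2 * (...) * dt ** 2 + 2 * tau_inv * (...) * dt + thrust` form term by term, so the two implementations agree for all non-negative thrusts.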
  {
    "path": "ros_dd_mpc/src/quad_mpc/__init__.py",
    "content": ""
  },
  {
    "path": "ros_dd_mpc/src/quad_mpc/create_ros_dd_mpc.py",
    "content": "\"\"\" Class for interfacing the data-augmented MPC with ROS.\n\nThis program is free software: you can redistribute it and/or modify it under\nthe terms of the GNU General Public License as published by the Free Software\nFoundation, either version 3 of the License, or (at your option) any later\nversion.\nThis program is distributed in the hope that it will be useful, but WITHOUT\nANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\nFOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.\nYou should have received a copy of the GNU General Public License along with\nthis program. If not, see <http://www.gnu.org/licenses/>.\n\"\"\"\n\nfrom src.quad_mpc.quad_3d import Quadrotor3D\nfrom src.quad_mpc.quad_3d_mpc import Quad3DMPC\nimport numpy as np\nimport os\nimport yaml\n\n\ndef custom_quad_param_loader(quad_name):\n\n    this_path = os.path.dirname(os.path.realpath(__file__))\n    params_file = os.path.join(this_path, '..', '..', 'config', quad_name + '.yaml')\n\n    # Get parameters for drone\n    with open(params_file, \"r\") as stream:\n        attrib = yaml.safe_load(stream)\n\n    quad = Quadrotor3D(noisy=False, drag=False, payload=False, motor_noise=False)\n    quad.mass = float(attrib['mass']) + (float(attrib['mass_rotor']) if 'mass_rotor' in attrib else 0) * 4\n    quad.J = np.array(attrib['inertia'])\n\n    quad.thrust_map = attrib[\"thrust_map\"]\n\n    if 'thrust_max' in attrib:\n        quad.max_thrust = attrib['thrust_max']\n    else:\n        float(attrib[\"motor_omega_max\"]) ** 2 * quad.thrust_map[0]\n    quad.c = float(attrib['kappa'])\n\n    if 'arm_length' in attrib and attrib['rotors_config'] == 'cross':\n        quad.length = float(attrib['arm_length'])\n        h = np.cos(np.pi / 4) * quad.length\n        quad.x_f = np.array([h, -h, -h, h])\n        quad.y_f = np.array([-h, h, -h, h])\n    elif 'arm_length' in attrib and attrib['rotors_config'] == 'plus':\n        quad.length = 
float(attrib['arm_length'])\n        quad.x_f = np.array([0, 0, -quad.length, quad.length])\n        quad.y_f = np.array([-quad.length, quad.length, 0, 0])\n    else:\n        tbm_fr = np.array(attrib['tbm_fr'])\n        tbm_bl = np.array(attrib['tbm_bl'])\n        tbm_br = np.array(attrib['tbm_br'])\n        tbm_fl = np.array(attrib['tbm_fl'])\n        quad.length = np.linalg.norm(tbm_fr)\n        quad.x_f = np.array([tbm_fr[0], tbm_bl[0], tbm_br[0], tbm_fl[0]])\n        quad.y_f = np.array([tbm_fr[1], tbm_bl[1], tbm_br[1], tbm_fl[1]])\n    quad.z_l_tau = np.array([-quad.c, -quad.c, quad.c, quad.c])\n\n    if 'motor_tau' in attrib:\n        quad.motor_tau = attrib['motor_tau']\n\n    if 'comm_delay' in attrib:\n        quad.comm_delay = attrib['comm_delay']\n\n    return quad\n\n\nclass ROSDDMPC:\n    def __init__(self, t_horizon, n_mpc_nodes, opt_dt, quad_name, point_reference=False, models=None,\n                 model_conf=None, rdrv=None):\n\n        quad = custom_quad_param_loader(quad_name)\n\n        # Initialize quad MPC\n        if point_reference:\n            acados_config = {\n                \"solver_type\": \"SQP\",\n                \"terminal_cost\": True\n            }\n        else:\n            acados_config = {\n                \"solver_type\": \"SQP_RTI\",\n                \"terminal_cost\": False\n            }\n\n        q_diagonal = np.array([0.5, 0.5, 0.5, 0.1, 0.1, 0.1, 0.05, 0.05, 0.05, 0.01, 0.01, 0.01])\n        r_diagonal = np.array([1.0, 1.0, 1.0, 1.0])\n\n        q_mask = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]).T\n\n        quad_mpc = Quad3DMPC(quad, t_horizon=t_horizon, optimization_dt=opt_dt, n_nodes=n_mpc_nodes,\n                             pre_trained_models=models, model_conf=model_conf, model_name=quad_name,\n                             solver_options=acados_config,\n                             q_mask=q_mask, q_cost=q_diagonal, r_cost=r_diagonal, rdrv_d_mat=rdrv)\n\n        self.quad_name = quad_name\n        
self.quad = quad\n        self.quad_mpc = quad_mpc\n\n        # Last optimization\n        self.last_w = None\n\n    def set_state(self, x):\n        \"\"\"\n        Set quadrotor state estimate from an odometry measurement\n        :param x: measured state from odometry. List with 13 components with the format: [p_xyz, q_wxyz, v_xyz, w_xyz]\n        \"\"\"\n\n        self.quad.set_state(x)\n\n    def set_gp_state(self, x):\n        \"\"\"\n        Set a quadrotor state estimate from an odometry measurement. While the input state in the function set_state()\n        will be used as initial state for the MPC optimization, the input state of this function will be used to\n        evaluate the GP's. If this function is never called, then the GP's will be evaluated with the state from\n        set_state() too.\n        :param x: measured state from odometry. List with 13 components with the format: [p_xyz, q_wxyz, v_xyz, w_xyz]\n        \"\"\"\n\n        self.quad.set_gp_state(x)\n\n    def set_reference(self, x_ref, u_ref):\n        \"\"\"\n        Set a reference state for the optimizer.\n        :param x_ref: list with 4 sub-components (position, angle quaternion, velocity, body rate). If these four\n        are lists, then this means a single target point is used. If they are Nx3 and Nx4 (for quaternion) numpy arrays,\n        then they are interpreted as a sequence of N tracking points.\n        :param u_ref: Optional target for the optimized control inputs\n        \"\"\"\n        return self.quad_mpc.set_reference(x_reference=x_ref, u_reference=u_ref)\n\n    def optimize(self, model_data):\n        w_opt, x_opt = self.quad_mpc.optimize(use_model=model_data, return_x=True)\n\n        return w_opt, x_opt\n"
  },
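`custom_quad_param_loader` derives the total mass, the peak per-motor thrust, and the rotor positions from the YAML dict. A sketch of those three computations with invented config values (not an actual config file from this repo):

```python
import numpy as np

# Invented parameters for illustration only.
attrib = {
    "mass": 0.75,
    "mass_rotor": 0.01,
    "thrust_map": [1.56e-6, 0.0, 0.0],
    "motor_omega_max": 3500.0,
    "arm_length": 0.125,
    "rotors_config": "cross",
}

# Total mass = frame mass + four rotor masses.
mass = float(attrib["mass"]) + float(attrib.get("mass_rotor", 0)) * 4

# Fallback used when 'thrust_max' is absent: evaluate the quadratic term of
# the thrust map at the maximum rotor speed to get peak per-motor thrust [N].
max_thrust = float(attrib["motor_omega_max"]) ** 2 * attrib["thrust_map"][0]

# 'cross' configuration: rotor x/y offsets are the arm length projected
# through 45 degrees.
h = np.cos(np.pi / 4) * float(attrib["arm_length"])
x_f = np.array([h, -h, -h, h])
y_f = np.array([-h, h, -h, h])
```

With these numbers the loader would report a 0.79 kg vehicle with roughly 19.1 N of thrust per motor.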
  {
    "path": "ros_dd_mpc/src/quad_mpc/quad_3d.py",
    "content": "\"\"\" Implementation of the Simplified Simulator and its quadrotor dynamics.\n\nThis program is free software: you can redistribute it and/or modify it under\nthe terms of the GNU General Public License as published by the Free Software\nFoundation, either version 3 of the License, or (at your option) any later\nversion.\nThis program is distributed in the hope that it will be useful, but WITHOUT\nANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\nFOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.\nYou should have received a copy of the GNU General Public License along with\nthis program. If not, see <http://www.gnu.org/licenses/>.\n\"\"\"\n\n\nfrom math import sqrt\nimport numpy as np\nfrom src.utils.utils import quaternion_to_euler, skew_symmetric, v_dot_q, unit_quat, quaternion_inverse\n\n\nclass Quadrotor3D:\n\n    def __init__(self, noisy=False, drag=False, payload=False, motor_noise=False):\n        \"\"\"\n        Initialization of the 3D quadrotor class\n        :param noisy: Whether noise is used in the simulation\n        :type noisy: bool\n        :param drag: Whether to simulate drag or not.\n        :type drag: bool\n        :param payload: Whether to simulate a payload force in the simulation\n        :type payload: bool\n        :param motor_noise: Whether non-gaussian noise is considered in the motor inputs\n        :type motor_noise: bool\n        \"\"\"\n\n        # Either 'x' or '+'. 
The xy configuration of the thrusters in the body plane.\n        configuration = 'x'\n\n        # Maximum thrust in Newtons of a thruster when rotating at maximum speed.\n        self.max_thrust = 20\n        self.thrust_map = 1e-6\n\n        # Motor time constant\n        self.motor_tau = 0.\n\n        # Communication delay\n        self.comm_delay = 0.\n\n        # System state space\n        self.pos = np.zeros((3,))\n        self.vel = np.zeros((3,))\n        self.angle = np.array([1., 0., 0., 0.])  # Quaternion format: qw, qx, qy, qz\n        self.a_rate = np.zeros((3,))\n\n        # System state space for GP evaluation.\n        self.gp_pos = None\n        self.gp_vel = None\n        self.gp_angle = None\n        self.gp_a_rate = None\n\n        # Input constraints\n        self.max_input_value = 1  # Motors at full thrust\n        self.min_input_value = 0  # Motors turned off\n\n        # Quadrotor intrinsic parameters\n        self.J = np.array([.03, .03, .06])  # N m s^2 = kg m^2\n        self.mass = 1.0  # kg\n\n        # Length of motor to CoG segment\n        self.length = 0.47 / 2  # m\n\n        # Positions of thrusters\n        if configuration == '+':\n            self.x_f = np.array([self.length, 0, -self.length, 0])\n            self.y_f = np.array([0, self.length, 0, -self.length])\n        elif configuration == 'x':\n            h = np.cos(np.pi / 4) * self.length\n            self.x_f = np.array([h, -h, -h, h])\n            self.y_f = np.array([-h, -h, h, h])\n\n        # For z thrust torque calculation\n        self.c = 0.013  # m   (z torque generated by each motor)\n        self.z_l_tau = np.array([-self.c, self.c, -self.c, self.c])\n\n        # Gravity vector\n        self.g = np.array([[0], [0], [9.8066]])  # m s^-2\n\n        # Actuation thrusts\n        self.u_noiseless = np.array([0.0, 0.0, 0.0, 0.0])\n        self.u = np.array([0.0, 0.0, 0.0, 0.0])  # N\n\n        # Drag coefficients [kg / m]\n        self.rotor_drag_xy = 0.3\n        
self.rotor_drag_z = 0.0  # No rotor drag in the z dimension\n        self.rotor_drag = np.array([self.rotor_drag_xy, self.rotor_drag_xy, self.rotor_drag_z])[:, np.newaxis]\n        self.aero_drag = 0.08\n\n        self.drag = drag\n        self.noisy = noisy\n        self.motor_noise = motor_noise\n\n        self.payload_mass = 0.3  # kg\n        self.payload_mass = self.payload_mass * payload\n\n    def set_state(self, *args, **kwargs):\n        if len(args) != 0:\n            assert len(args) == 1 and len(args[0]) == 13\n            self.pos[0], self.pos[1], self.pos[2], \\\n            self.angle[0], self.angle[1], self.angle[2], self.angle[3], \\\n            self.vel[0], self.vel[1], self.vel[2], \\\n            self.a_rate[0], self.a_rate[1], self.a_rate[2] \\\n                = args[0]\n\n        else:\n            self.pos = kwargs[\"pos\"]\n            self.angle = kwargs[\"angle\"]\n            self.vel = kwargs[\"vel\"]\n            self.a_rate = kwargs[\"rate\"]\n\n    def set_gp_state(self, *args, **kwargs):\n        if self.gp_pos is None:\n            # Instantiate these variables for the first time\n            self.gp_pos = np.zeros((3,))\n            self.gp_vel = np.zeros((3,))\n            self.gp_angle = np.array([1., 0., 0., 0.])  # Quaternion format: qw, qx, qy, qz\n            self.gp_a_rate = np.zeros((3,))\n\n        if len(args) != 0:\n            assert len(args) == 1 and len(args[0]) == 13\n            self.gp_pos[0], self.gp_pos[1], self.gp_pos[2], \\\n            self.gp_angle[0], self.gp_angle[1], self.gp_angle[2], self.gp_angle[3], \\\n            self.gp_vel[0], self.gp_vel[1], self.gp_vel[2], \\\n            self.gp_a_rate[0], self.gp_a_rate[1], self.gp_a_rate[2] \\\n                = args[0]\n\n        else:\n            self.gp_pos = kwargs[\"pos\"]\n            self.gp_angle = kwargs[\"angle\"]\n            self.gp_vel = kwargs[\"vel\"]\n            self.gp_a_rate = kwargs[\"rate\"]\n\n    def get_state(self, quaternion=False, 
stacked=False):\n\n        if quaternion and not stacked:\n            return [self.pos, self.angle, self.vel, self.a_rate]\n        if quaternion and stacked:\n            return [self.pos[0], self.pos[1], self.pos[2], self.angle[0], self.angle[1], self.angle[2], self.angle[3],\n                    self.vel[0], self.vel[1], self.vel[2], self.a_rate[0], self.a_rate[1], self.a_rate[2]]\n\n        angle = quaternion_to_euler(self.angle)\n        if not quaternion and stacked:\n            return [self.pos[0], self.pos[1], self.pos[2], angle[0], angle[1], angle[2],\n                    self.vel[0], self.vel[1], self.vel[2], self.a_rate[0], self.a_rate[1], self.a_rate[2]]\n        return [self.pos, angle, self.vel, self.a_rate]\n\n    def get_gp_state(self, quaternion=False, stacked=False):\n\n        if self.gp_pos is None:\n            return None\n\n        if quaternion and not stacked:\n            return [self.gp_pos, self.gp_angle, self.gp_vel, self.gp_a_rate]\n        if quaternion and stacked:\n            return [self.gp_pos[0], self.gp_pos[1], self.gp_pos[2],\n                    self.gp_angle[0], self.gp_angle[1], self.gp_angle[2], self.gp_angle[3],\n                    self.gp_vel[0], self.gp_vel[1], self.gp_vel[2],\n                    self.gp_a_rate[0], self.gp_a_rate[1], self.gp_a_rate[2]]\n\n        gp_angle = quaternion_to_euler(self.gp_angle)\n        if not quaternion and stacked:\n            return [self.gp_pos[0], self.gp_pos[1], self.gp_pos[2],\n                    gp_angle[0], gp_angle[1], gp_angle[2],\n                    self.gp_vel[0], self.gp_vel[1], self.gp_vel[2],\n                    self.gp_a_rate[0], self.gp_a_rate[1], self.gp_a_rate[2]]\n        return [self.gp_pos, gp_angle, self.gp_vel, self.gp_a_rate]\n\n    def get_control(self, noisy=False):\n        if not noisy:\n            return self.u_noiseless\n        else:\n            return self.u\n\n    def update(self, u, dt):\n        \"\"\"\n        Runge-Kutta 4th order dynamics 
integration\n\n        :param u: 4-dimensional vector with components between [0.0, 1.0] that represent the activation of each motor.\n        :param dt: time differential\n        \"\"\"\n\n        # Clip inputs\n        for i, u_i in enumerate(u):\n            self.u_noiseless[i] = max(min(u_i, self.max_input_value), self.min_input_value)\n\n        # Apply noise to inputs (Gaussian noise with a small input-dependent bias and std proportional to sqrt(input))\n        if self.motor_noise:\n            for i, u_i in enumerate(self.u_noiseless):\n                std = 0.02 * sqrt(u_i)\n                noise_u = np.random.normal(loc=0.1 * (u_i / 1.3) ** 2, scale=std)\n                self.u[i] = max(min(u_i - noise_u, self.max_input_value), self.min_input_value) * self.max_thrust\n        else:\n            self.u = self.u_noiseless * self.max_thrust\n\n        # Generate disturbance forces / torques\n        if self.noisy:\n            f_d = np.random.normal(size=(3, 1), scale=10 * dt)\n            t_d = np.random.normal(size=(3, 1), scale=10 * dt)\n        else:\n            f_d = np.zeros((3, 1))\n            t_d = np.zeros((3, 1))\n\n        x = self.get_state(quaternion=True, stacked=False)\n\n        # RK4 integration\n        k1 = [self.f_pos(x), self.f_att(x), self.f_vel(x, self.u, f_d), self.f_rate(x, self.u, t_d)]\n        x_aux = [x[i] + dt / 2 * k1[i] for i in range(4)]\n        k2 = [self.f_pos(x_aux), self.f_att(x_aux), self.f_vel(x_aux, self.u, f_d), self.f_rate(x_aux, self.u, t_d)]\n        x_aux = [x[i] + dt / 2 * k2[i] for i in range(4)]\n        k3 = [self.f_pos(x_aux), self.f_att(x_aux), self.f_vel(x_aux, self.u, f_d), self.f_rate(x_aux, self.u, t_d)]\n        x_aux = [x[i] + dt * k3[i] for i in range(4)]\n        k4 = [self.f_pos(x_aux), self.f_att(x_aux), self.f_vel(x_aux, self.u, f_d), self.f_rate(x_aux, self.u, t_d)]\n        x = [x[i] + dt * (1.0 / 6.0 * k1[i] + 2.0 / 6.0 * k2[i] + 2.0 / 6.0 * k3[i] + 1.0 / 6.0 * k4[i]) for i in\n             
range(4)]\n\n        # Ensure unit quaternion\n        x[1] = unit_quat(x[1])\n\n        self.pos, self.angle, self.vel, self.a_rate = x\n\n    def f_pos(self, x):\n        \"\"\"\n        Time-derivative of the position vector\n        :param x: 4-length array of input state with components: 3D pos, quaternion angle, 3D vel, 3D rate\n        :return: position differential increment (vector): d[pos_x; pos_y; pos_z]/dt\n        \"\"\"\n\n        vel = x[2]\n        return vel\n\n    def f_att(self, x):\n        \"\"\"\n        Time-derivative of the attitude in quaternion form\n        :param x: 4-length array of input state with components: 3D pos, quaternion angle, 3D vel, 3D rate\n        :return: attitude differential increment (quaternion qw, qx, qy, qz): da/dt\n        \"\"\"\n\n        rate = x[3]\n        angle_quaternion = x[1]\n\n        return 1 / 2 * skew_symmetric(rate).dot(angle_quaternion)\n\n    def f_vel(self, x, u, f_d):\n        \"\"\"\n        Time-derivative of the velocity vector\n        :param x: 4-length array of input state with components: 3D pos, quaternion angle, 3D vel, 3D rate\n        :param u: control input vector (4-dimensional): [thrust_motor_1, ..., thrust_motor_4]\n        :param f_d: disturbance force vector (3-dimensional)\n        :return: 3D velocity differential increment (vector): d[vel_x; vel_y; vel_z]/dt\n        \"\"\"\n\n        a_thrust = np.array([[0], [0], [np.sum(u)]]) / self.mass\n\n        if self.drag:\n            # Transform velocity to body frame\n            v_b = v_dot_q(x[2], quaternion_inverse(x[1]))[:, np.newaxis]\n            # Compute aerodynamic drag acceleration in body frame\n            a_drag = -self.aero_drag * v_b ** 2 * np.sign(v_b) / self.mass\n            # Add rotor drag\n            a_drag -= self.rotor_drag * v_b / self.mass\n            # Transform drag acceleration to world frame\n            a_drag = v_dot_q(a_drag, x[1])\n        else:\n            a_drag = np.zeros((3, 1))\n\n        
angle_quaternion = x[1]\n\n        a_payload = -self.payload_mass * self.g / self.mass\n\n        return np.squeeze(-self.g + a_payload + a_drag + v_dot_q(a_thrust + f_d / self.mass, angle_quaternion))\n\n    def f_rate(self, x, u, t_d):\n        \"\"\"\n        Time-derivative of the angular rate\n        :param x: 4-length array of input state with components: 3D pos, quaternion angle, 3D vel, 3D rate\n        :param u: control input vector (4-dimensional): [thrust_motor_1, ..., thrust_motor_4]\n        :param t_d: disturbance torque (3D)\n        :return: angular rate differential increment (3D vector): dr/dt\n        \"\"\"\n\n        rate = x[3]\n        return np.array([\n            1 / self.J[0] * (u.dot(self.y_f) + t_d[0] + (self.J[1] - self.J[2]) * rate[1] * rate[2]),\n            1 / self.J[1] * (-u.dot(self.x_f) + t_d[1] + (self.J[2] - self.J[0]) * rate[2] * rate[0]),\n            1 / self.J[2] * (u.dot(self.z_l_tau) + t_d[2] + (self.J[0] - self.J[1]) * rate[0] * rate[1])\n        ]).squeeze()\n"
  },
  {
    "path": "ros_dd_mpc/src/quad_mpc/quad_3d_mpc.py",
    "content": "\"\"\" Implementation of the data-augmented MPC for quadrotors.\n\nThis program is free software: you can redistribute it and/or modify it under\nthe terms of the GNU General Public License as published by the Free Software\nFoundation, either version 3 of the License, or (at your option) any later\nversion.\nThis program is distributed in the hope that it will be useful, but WITHOUT\nANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\nFOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.\nYou should have received a copy of the GNU General Public License along with\nthis program. If not, see <http://www.gnu.org/licenses/>.\n\"\"\"\n\n\nimport numpy as np\nimport torch\n\nfrom src.quad_mpc.quad_3d_optimizer import Quad3DOptimizer\nfrom src.model_fitting.gp_common import restore_gp_regressors\nfrom src.utils.quad_3d_opt_utils import simulate_plant, uncertainty_forward_propagation\nfrom src.utils.utils import make_bx_matrix\n\n\nclass Quad3DMPC:\n    def __init__(self, my_quad, t_horizon=1.0, n_nodes=5, q_cost=None, r_cost=None,\n                 optimization_dt=5e-2, simulation_dt=5e-4, pre_trained_models=None, model_name=\"my_quad\", q_mask=None,\n                 solver_options=None, rdrv_d_mat=None, model_conf=None):\n        \"\"\"\n        :param my_quad: Quadrotor3D simulator object\n        :type my_quad: Quadrotor3D\n        :param t_horizon: time horizon for optimization loop MPC controller\n        :param n_nodes: number of MPC nodes\n        :param optimization_dt: time step between two successive optimizations intended to be used.\n        :param simulation_dt: discretized time-step for the quadrotor simulation\n        :param pre_trained_models: additional pre-trained GP regressors to be combined with nominal model in the MPC\n        :param q_cost: diagonal of Q matrix for LQR cost of MPC cost function. 
Must be a numpy array of length 12.\n        :param r_cost: diagonal of R matrix for LQR cost of MPC cost function. Must be a numpy array of length 4.\n        :param q_mask: Optional boolean mask that determines which variables from the state compute towards the\n        cost function. In case no argument is passed, all variables are weighted.\n        :param solver_options: Optional set of extra options dictionary for acados solver.\n        :param rdrv_d_mat: 3x3 matrix that corrects the drag with a linear model according to Faessler et al. 2018. None\n        if not used\n        \"\"\"\n\n        if rdrv_d_mat is not None:\n            # rdrv is currently not compatible with covariance mode or with GP-MPC.\n            print(\"RDRv mode\")\n            self.rdrv = rdrv_d_mat\n            assert pre_trained_models is None\n        else:\n            self.rdrv = None\n\n        self.quad = my_quad\n        self.simulation_dt = simulation_dt\n        self.optimization_dt = optimization_dt\n\n        # motor commands from last step\n        self.motor_u = np.array([0., 0., 0., 0.])\n\n        self.n_nodes = n_nodes\n        self.t_horizon = t_horizon\n\n        self.mlp = None\n        self.mlp_approx = None\n\n        # Load augmented dynamics model with GP regressor\n        if isinstance(pre_trained_models, torch.nn.Module):\n            self.gp_ensemble = None\n            self.B_x = {}\n            x_dims = len(my_quad.get_state(quaternion=True, stacked=True))\n            pred_dims = [7, 8, 9] + ([10, 11, 12] if model_conf['torque_output'] else [])\n            for y_dim in pred_dims:\n                self.B_x[y_dim] = make_bx_matrix(x_dims, [y_dim])\n            self.mlp = pre_trained_models\n\n        elif pre_trained_models is not None:\n            self.gp_ensemble = restore_gp_regressors(pre_trained_models)\n            x_dims = len(my_quad.get_state(quaternion=True, stacked=True))\n            self.B_x = {}\n            for y_dim in 
self.gp_ensemble.gp.keys():\n                self.B_x[y_dim] = make_bx_matrix(x_dims, [y_dim])\n\n        else:\n            self.gp_ensemble = None\n            self.B_x = {}  # Selection matrix of the GP regressor-modified system states\n\n        # For MPC optimization use\n        self.quad_opt = Quad3DOptimizer(my_quad, t_horizon=t_horizon, n_nodes=n_nodes,\n                                        q_cost=q_cost, r_cost=r_cost,\n                                        B_x=self.B_x, gp_regressors=self.gp_ensemble,\n                                        mlp_regressor=self.mlp, mlp_conf=model_conf,\n                                        model_name=model_name, q_mask=q_mask,\n                                        solver_options=solver_options, rdrv_d_mat=rdrv_d_mat)\n\n    def clear(self):\n        self.quad_opt.clear_acados_model()\n\n    def get_state(self):\n        \"\"\"\n        Returns the state of the drone, with the angle described as a wxyz quaternion\n        :return: 13x1 array with the drone state: [p_xyz, a_wxyz, v_xyz, r_xyz]\n        \"\"\"\n\n        x = np.expand_dims(self.quad.get_state(quaternion=True, stacked=True), 1)\n        return x\n\n    def set_reference(self, x_reference, u_reference=None):\n        \"\"\"\n        Sets a target state for the MPC optimizer\n        :param x_reference: list with 4 sub-components (position, angle quaternion, velocity, body rate). If these four\n        are lists, then this means a single target point is used. 
If they are Nx3 and Nx4 (for quaternion) numpy arrays,\n        then they are interpreted as a sequence of N tracking points.\n        :param u_reference: Optional target for the optimized control inputs\n        \"\"\"\n\n        if isinstance(x_reference[0], list):\n            # Target state is just a point\n            return self.quad_opt.set_reference_state(x_reference, u_reference)\n        else:\n            # Target state is a sequence\n            return self.quad_opt.set_reference_trajectory(x_reference, u_reference)\n\n    def optimize(self, use_model=0, return_x=False):\n        \"\"\"\n        Runs MPC optimization to reach the pre-set target.\n        :param use_model: Integer. Select which dynamics model to use from the available options.\n        :param return_x: bool, whether to also return the optimized sequence of states alongside with the controls.\n\n        :return: 4*m vector of optimized control inputs with the format: [u_1(0), u_2(0), u_3(0), u_4(0), u_1(1), ...,\n        u_3(m-1), u_4(m-1)]. 
If return_x is True, will also return a vector of shape N+1 x 13 containing the optimized\n        state prediction.\n        \"\"\"\n\n        quad_current_state = self.quad.get_state(quaternion=True, stacked=True)\n        quad_gp_state = self.quad.get_gp_state(quaternion=True, stacked=True)\n\n        # Remove rate state for simplified model NLP\n        out_out = self.quad_opt.run_optimization(quad_current_state, use_model=use_model, return_x=return_x,\n                                                 gp_regression_state=quad_gp_state)\n        return out_out\n\n    def simulate(self, ref_u):\n        \"\"\"\n        Runs the simulation step for the dynamics model of the quadrotor 3D.\n\n        :param ref_u: 4-length reference vector of control inputs\n        \"\"\"\n\n        # Simulate step\n        self.quad.update(ref_u, self.simulation_dt)\n\n    def simulate_plant(self, w_opt, t_horizon=None, dt_vec=None, progress_bar=False):\n        \"\"\"\n        Given a sequence of n inputs, evaluates the simulated discrete-time plant model n steps into the future. The\n        current drone state will not be changed by calling this method.\n        :param w_opt: sequence of control n x m control inputs, where n is the number of steps and m is the\n        dimensionality of a control input.\n        :param t_horizon: time corresponding to the duration of the n control inputs. 
In the case that the w_opt comes\n        from an MPC optimization, this parameter should be the MPC time horizon.\n        :param dt_vec: a vector of timestamps, the same length as w_opt, corresponding to the total time each input is\n        applied.\n        :param progress_bar: boolean - whether to show a progress bar on the console or not.\n        :return: the sequence of simulated quadrotor states.\n        \"\"\"\n\n        if t_horizon is None and dt_vec is None:\n            t_horizon = self.t_horizon\n\n        return simulate_plant(self.quad, w_opt, simulation_dt=self.simulation_dt, simulate_func=self.simulate,\n                              t_horizon=t_horizon, dt_vec=dt_vec, progress_bar=progress_bar)\n\n    def forward_prop(self, x_0, w_opt, cov_0=None, t_horizon=None, dt=None, use_gp=False, use_model=0):\n        \"\"\"\n        Computes the forward propagation of the state using an MPC-optimized control input sequence.\n        :param x_0: Initial n-length state vector\n        :param w_opt: Optimized m*4-length sequence of control inputs from MPC, with the vector format:\n        [u_1(1), u_2(1), u_3(1), u_4(1), ..., u_3(m), u_4(m)]\n        :param cov_0: Initial covariance estimate (default 0). Can be either a positive semi-definite matrix or a\n        1D vector, which will be the diagonal of the covariance matrix. In both cases, the resulting covariance matrix\n        must be nxn shape, where n is the length of x_0.\n        :param t_horizon: time span of the control inputs (default is the time horizon of the MPC)\n        :param dt: Optional. Vector of length m, with the corresponding integration time for every control input in\n        w_opt. If none is provided, the default integration step is used.\n        :param use_gp: Boolean, whether to use GP regressors when performing the integration or not.\n        :param use_model: Integer. 
Select which dynamics model to use from the available options.\n        :return: An m x n array of propagated (expected) state estimates, and an m x n x n array with the m propagated\n        covariance matrices.\n        \"\"\"\n\n        # Default parameters\n        if dt is not None:\n            assert len(dt) == len(w_opt) / 4\n            t_horizon = np.sum(dt)\n        if t_horizon is None:\n            t_horizon = self.t_horizon\n        if cov_0 is None:\n            cov_0 = np.diag(np.zeros_like(np.squeeze(x_0)))\n        elif len(cov_0.shape) == 1:\n            cov_0 = np.diag(cov_0)\n        elif len(cov_0.shape) > 2:\n            raise TypeError(\"The initial covariance value must be either a 1D vector or a 2D matrix\")\n\n        gp_ensemble = self.gp_ensemble if use_gp else None\n\n        # Compute forward propagation of state pdf\n        return uncertainty_forward_propagation(x_0, w_opt, t_horizon=t_horizon, covar=cov_0, dt=dt,\n                                               discrete_dynamics_f=self.quad_opt.discretize_f_and_q,\n                                               dynamics_jac_f=self.quad_opt.quad_xdot_jac,\n                                               B_x=self.B_x, gp_regressors=gp_ensemble,\n                                               use_model=use_model, m_integration_steps=1)\n\n    @staticmethod\n    def reshape_input_sequence(u_seq):\n        \"\"\"\n        Reshapes an input trajectory from the 1D format: [u_0(0), u_1(0), ..., u_0(n-1), u_1(n-1), ..., u_m-1(n-1)]\n        to a 2D n x m array.\n        :param u_seq: 1D input sequence\n        :return: 2D input sequence, where n is the number of control inputs and m is the dimension of a single input.\n        \"\"\"\n\n        k = np.arange(u_seq.shape[0] / 4, dtype=int)\n        u_seq = np.atleast_2d(u_seq).T if len(u_seq.shape) == 1 else u_seq\n        u_seq = np.concatenate((u_seq[4 * k], u_seq[4 * k + 1], u_seq[4 * k + 2], u_seq[4 * k + 3]), 1)\n        return u_seq\n\n    
def reset(self):\n        return\n"
  },
  {
    "path": "ros_dd_mpc/src/quad_mpc/quad_3d_optimizer.py",
    "content": "\"\"\" Implementation of the nonlinear optimizer for the data-augmented MPC.\n\nThis program is free software: you can redistribute it and/or modify it under\nthe terms of the GNU General Public License as published by the Free Software\nFoundation, either version 3 of the License, or (at your option) any later\nversion.\nThis program is distributed in the hope that it will be useful, but WITHOUT\nANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\nFOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.\nYou should have received a copy of the GNU General Public License along with\nthis program. If not, see <http://www.gnu.org/licenses/>.\n\"\"\"\n\n\nimport os\nimport sys\nimport shutil\nimport casadi as cs\nimport numpy as np\nfrom copy import copy\n\nimport torch\nfrom acados_template import AcadosOcp, AcadosOcpSolver, AcadosModel\n\nfrom config.configuration_parameters import GroundEffectMapConfig\nfrom src.quad_mpc.quad_3d import Quadrotor3D\nfrom src.model_fitting.gp import GPEnsemble\nfrom src.utils.ground_map import GroundMapWithBox\nfrom src.utils.utils import skew_symmetric, v_dot_q, safe_mkdir_recursive, quaternion_inverse\nfrom src.utils.quad_3d_opt_utils import discretize_dynamics_and_cost\n\n\nclass Quad3DOptimizer:\n    def __init__(self, quad, t_horizon=1, n_nodes=20,\n                 q_cost=None, r_cost=None, q_mask=None,\n                 B_x=None, gp_regressors=None, mlp_conf=None, mlp_regressor=None, rdrv_d_mat=None,\n                 model_name=\"quad_3d_acados_mpc\", solver_options=None):\n        \"\"\"\n        :param quad: quadrotor object\n        :type quad: Quadrotor3D\n        :param t_horizon: time horizon for MPC optimization\n        :param n_nodes: number of optimization nodes until time horizon\n        :param q_cost: diagonal of Q matrix for LQR cost of MPC cost function. 
Must be a numpy array of length 12.\n        :param r_cost: diagonal of R matrix for LQR cost of MPC cost function. Must be a numpy array of length 4.\n        :param B_x: dictionary of matrices that maps the outputs of the gp regressors to the state space.\n        :param gp_regressors: Gaussian Process ensemble for correcting the nominal model\n        :type gp_regressors: GPEnsemble\n        :param q_mask: Optional boolean mask that determines which variables from the state compute towards the cost\n        function. In case no argument is passed, all variables are weighted.\n        :param solver_options: Optional set of extra options dictionary for solvers.\n        :param rdrv_d_mat: 3x3 matrix that corrects the drag with a linear model according to Faessler et al. 2018. None\n        if not used\n        \"\"\"\n\n        # Weighted squared error loss function q = (p_xyz, a_xyz, v_xyz, r_xyz), r = (u1, u2, u3, u4)\n        if q_cost is None:\n            q_cost = np.array([10, 10, 10, 0.1, 0.1, 0.1, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])\n        if r_cost is None:\n            r_cost = np.array([0.1, 0.1, 0.1, 0.1])\n\n        self.T = t_horizon  # Time horizon\n        self.N = n_nodes  # number of control nodes within horizon\n\n        self.quad = quad\n\n        self.max_u = quad.max_input_value\n        self.min_u = quad.min_input_value\n\n        # Declare model variables\n        self.p = cs.MX.sym('p', 3)  # position\n        self.q = cs.MX.sym('a', 4)  # angle quaternion (wxyz)\n        self.v = cs.MX.sym('v', 3)  # velocity\n        self.r = cs.MX.sym('r', 3)  # angle rate\n\n        # Full state vector (13-dimensional)\n        self.x = cs.vertcat(self.p, self.q, self.v, self.r)\n        self.state_dim = 13\n\n        # Control input vector\n        u1 = cs.MX.sym('u1')\n        u2 = cs.MX.sym('u2')\n        u3 = cs.MX.sym('u3')\n        u4 = cs.MX.sym('u4')\n        self.u = cs.vertcat(u1, u2, u3, u4)\n\n        # Nominal model equations symbolic 
function (no GP)\n        self.quad_xdot_nominal = self.quad_dynamics(rdrv_d_mat)\n\n        # Linearized model dynamics symbolic function\n        self.quad_xdot_jac = self.linearized_quad_dynamics()\n\n        # Initialize objective function, 0 target state and integration equations\n        self.L = None\n        self.target = None\n        self.u_target = None\n\n        self.mlp_regressor = None\n        self.mlp_conf = None\n        self.x_opt_acados = None\n        self.w_opt_acados = None\n        self.mlp_params = None\n\n        # Check if GP ensemble has an homogeneous feature space (if actual Ensemble)\n        if gp_regressors is not None and gp_regressors.homogeneous:\n            self.gp_reg_ensemble = gp_regressors\n            self.B_x = [B_x[dim] for dim in gp_regressors.dim_idx]\n            self.B_x = np.squeeze(np.stack(self.B_x, axis=1))\n            self.B_x = self.B_x[:, np.newaxis] if len(self.B_x.shape) == 1 else self.B_x\n            self.B_z = gp_regressors.B_z\n        elif gp_regressors and gp_regressors.no_ensemble:\n            # If not homogeneous, we have to treat each z feature vector independently\n            self.gp_reg_ensemble = gp_regressors\n            self.B_x = [B_x[dim] for dim in gp_regressors.dim_idx]\n            self.B_x = np.squeeze(np.stack(self.B_x, axis=1))\n            self.B_x = self.B_x[:, np.newaxis] if len(self.B_x.shape) == 1 else self.B_x\n            self.B_z = gp_regressors.B_z\n        elif mlp_regressor is not None:\n            self.gp_reg_ensemble = None\n            self.mlp_regressor = mlp_regressor\n            self.B_x = np.hstack(list(B_x.values()))\n            self.mlp_conf = mlp_conf\n        else:\n            self.gp_reg_ensemble = None\n\n        # Declare model variables for GP prediction (only used in real quadrotor flight with EKF estimator).\n        # Will be used as initial state for GP prediction as Acados parameters.\n        self.gp_p = cs.MX.sym('gp_p', 3)\n        self.gp_q = 
cs.MX.sym('gp_a', 4)\n        self.gp_v = cs.MX.sym('gp_v', 3)\n        self.gp_r = cs.MX.sym('gp_r', 3)\n        self.gp_x = cs.vertcat(self.gp_p, self.gp_q, self.gp_v, self.gp_r)\n\n        # The trigger variable is used to tell ACADOS to use the additional GP state estimate in the first optimization\n        # node and the regular integrated state in the rest\n        self.trigger_var = cs.MX.sym('trigger', 1)\n\n        # Build full model. Will have 13 variables. self.dyn_x contains the symbolic variable that\n        # should be used to evaluate the dynamics function. It corresponds to self.x if there are no GP's, or\n        # self.x_with_gp otherwise\n        acados_models, nominal_with_gp = self.acados_setup_model(\n            self.quad_xdot_nominal(x=self.x, u=self.u)['x_dot'], model_name)\n\n        # Convert dynamics variables to functions of the state and input vectors\n        self.quad_xdot = {}\n        for dyn_model_idx in nominal_with_gp.keys():\n            dyn = nominal_with_gp[dyn_model_idx]\n            self.quad_xdot[dyn_model_idx] = cs.Function(\n                'x_dot', [self.x, self.u], [dyn], ['x', 'u'], ['x_dot'], {'allow_free': True})\n\n        # ### Setup and compile Acados OCP solvers ### #\n        self.acados_ocp_solver = {}\n\n        # Check if GP's have been loaded\n        self.with_gp = self.gp_reg_ensemble is not None\n        self.with_mlp = self.mlp_regressor is not None\n\n        # Add one more weight to the rotation (use quaternion norm weighting in acados)\n        q_diagonal = np.concatenate((q_cost[:3], np.mean(q_cost[3:6])[np.newaxis], q_cost[3:]))\n        if q_mask is not None:\n            q_mask = np.concatenate((q_mask[:3], np.zeros(1), q_mask[3:]))\n            q_diagonal *= q_mask\n\n        # Ensure current working directory is current folder\n        os.chdir(os.path.dirname(os.path.realpath(__file__)))\n        self.acados_models_dir = '../../acados_models'\n        
safe_mkdir_recursive(os.path.join(os.getcwd(), self.acados_models_dir))\n\n        for key, key_model in zip(acados_models.keys(), acados_models.values()):\n            nx = key_model.x.size()[0]\n            nu = key_model.u.size()[0]\n            ny = nx + nu\n            n_param = key_model.p.size()[0] if isinstance(key_model.p, cs.MX) else 0\n\n            acados_source_path = os.environ['ACADOS_SOURCE_DIR']\n            sys.path.insert(0, '../common')\n\n            # Create OCP object to formulate the optimization\n            ocp = AcadosOcp()\n            ocp.acados_include_path = acados_source_path + '/include'\n            ocp.acados_lib_path = acados_source_path + '/lib'\n            ocp.model = key_model\n            ocp.dims.N = self.N\n            ocp.solver_options.tf = t_horizon\n\n            # Initialize parameters\n            ocp.dims.np = n_param\n            ocp.parameter_values = np.zeros(n_param)\n\n            ocp.cost.cost_type = 'LINEAR_LS'\n            ocp.cost.cost_type_e = 'LINEAR_LS'\n\n            ocp.cost.W = np.diag(np.concatenate((q_diagonal, r_cost)))\n            ocp.cost.W_e = np.diag(q_diagonal)\n            terminal_cost = 0 if solver_options is None or not solver_options[\"terminal_cost\"] else 1\n            ocp.cost.W_e *= terminal_cost\n\n            ocp.cost.Vx = np.zeros((ny, nx))\n            ocp.cost.Vx[:nx, :nx] = np.eye(nx)\n            ocp.cost.Vu = np.zeros((ny, nu))\n            ocp.cost.Vu[-4:, -4:] = np.eye(nu)\n\n            ocp.cost.Vx_e = np.eye(nx)\n\n            # Initial reference trajectory (will be overwritten)\n            x_ref = np.zeros(nx)\n            ocp.cost.yref = np.concatenate((x_ref, np.array([0.0, 0.0, 0.0, 0.0])))\n            ocp.cost.yref_e = x_ref\n\n            # Initial state (will be overwritten)\n            ocp.constraints.x0 = x_ref\n\n            # Set constraints\n            ocp.constraints.lbu = np.array([self.min_u] * 4)\n            ocp.constraints.ubu = 
np.array([self.max_u] * 4)\n            ocp.constraints.idxbu = np.array([0, 1, 2, 3])\n\n            # Solver options\n            ocp.solver_options.qp_solver = 'FULL_CONDENSING_HPIPM'\n            ocp.solver_options.hessian_approx = 'GAUSS_NEWTON'\n            ocp.solver_options.integrator_type = 'ERK'\n            ocp.solver_options.print_level = 0\n            ocp.solver_options.nlp_solver_type = 'SQP_RTI' if solver_options is None else solver_options[\"solver_type\"]\n\n            # Compile acados OCP solver if necessary\n            json_file = os.path.join(self.acados_models_dir, key_model.name + '_acados_ocp.json')\n            self.acados_ocp_solver[key] = AcadosOcpSolver(ocp, json_file=json_file)\n\n    def clear_acados_model(self):\n        \"\"\"\n        Removes previous stored acados models to avoid name conflicts.\n        \"\"\"\n\n        json_file = os.path.join(self.acados_models_dir, 'acados_ocp.json')\n        if os.path.exists(json_file):\n            os.remove(os.path.join(os.getcwd(), json_file))\n        compiled_model_dir = os.path.join(os.getcwd(), 'c_generated_code')\n        if os.path.exists(compiled_model_dir):\n            shutil.rmtree(compiled_model_dir)\n\n    def add_missing_states(self, gp_outs):\n        \"\"\"\n        Adds 0's to the GP predictions in case the GP's only regressed a subset of the three velocity states.\n        :param gp_outs: dictionary of GP predictions\n        :type gp_outs: dict\n        :return: The same dictionary as the input but all entries will be 3-dimensional, with 0's added to the right\n        dimensions if they were missing.\n        \"\"\"\n\n        # Dims predicted by the GP. 
A subset of real_pred_dims\n        pred_dims = self.gp_reg_ensemble.dim_idx\n\n        # All the dims that should be filled in\n        real_pred_dims = [7, 8, 9]\n\n        for i, dim in enumerate(real_pred_dims):\n            if dim not in pred_dims:\n                for k, v in zip(gp_outs.keys(), gp_outs.values()):\n                    if (isinstance(v, cs.MX) or isinstance(v, cs.DM)) and k == 'pred':\n                        gp_outs[k] = cs.vertcat(v[:i], cs.MX.zeros(1), v[i:])\n\n        return gp_outs\n\n    def remove_extra_states(self, vec):\n        # Dims predicted by the GP. A subset of real_pred_dims\n        pred_dims = self.gp_reg_ensemble.dim_idx\n\n        # All the dims that should be filled in\n        real_pred_dims = [7, 8, 9]\n\n        for i, dim in enumerate(real_pred_dims):\n            if dim not in pred_dims:\n                vec = cs.vertcat(vec[:i], vec[i + 1:])\n\n        return vec\n\n    def acados_setup_model(self, nominal, model_name):\n        \"\"\"\n        Builds Acados symbolic models using CasADi expressions.\n        :param model_name: name for the acados model. 
Must be different from previously used names or there may be\n        problems loading the right model.\n        :param nominal: CasADi symbolic nominal model of the quadrotor: f(self.x, self.u) = x_dot, dimensions 13x1.\n        :return: Returns two outputs, where m is the number of GPs in the GP ensemble, or 1 if no GP:\n            - A dictionary of m AcadosModel of the GP-augmented quadrotor\n            - A dictionary of m CasADi symbolic nominal dynamics equations with GP mean value augmentations (if with GP)\n        :rtype: dict, dict\n        \"\"\"\n\n        def fill_in_acados_model(x, u, p, dynamics, name):\n\n            x_dot = cs.MX.sym('x_dot', dynamics.shape)\n            f_impl = x_dot - dynamics\n\n            # Dynamics model\n            model = AcadosModel()\n            model.f_expl_expr = dynamics\n            model.f_impl_expr = f_impl\n            model.x = x\n            model.xdot = x_dot\n            model.u = u\n            model.p = p\n            model.name = name\n\n            return model\n\n        acados_models = {}\n        dynamics_equations = {}\n\n        # Run GP inference if GPs are available\n        if self.gp_reg_ensemble is not None:\n\n            # The feature vector consists of the elements of x and u determined by the selection matrix B_z. The trigger var is used\n            # to select the gp-specific state estimate in the first optimization node, and the regular integrated state\n            # in the rest. 
The computation of the features z is done within the GPEnsemble.\n            gp_x = self.gp_x * self.trigger_var + self.x * (1 - self.trigger_var)\n            #  Transform velocity to body frame\n            v_b = v_dot_q(gp_x[7:10], quaternion_inverse(gp_x[3:7]))\n            gp_x = cs.vertcat(gp_x[:7], v_b, gp_x[10:])\n            gp_u = self.u\n\n            gp_dims = self.gp_reg_ensemble.dim_idx\n\n            # Build one model for each member of the GP ensemble\n            for i in range(self.gp_reg_ensemble.n_models):\n                # Evaluate the i-th cluster of the GP ensemble\n                cluster_id = {k: [v] for k, v in zip(gp_dims, i * np.ones_like(gp_dims, dtype=int))}\n                outs = self.gp_reg_ensemble.predict(gp_x, gp_u, return_cov=False, gp_idx=cluster_id, return_z=False)\n\n                # Unpack prediction outputs. Transform back to world reference frame\n                outs = self.add_missing_states(outs)\n                gp_means = v_dot_q(outs[\"pred\"], gp_x[3:7])\n                gp_means = self.remove_extra_states(gp_means)\n\n                # Add GP mean prediction\n                dynamics_equations[i] = nominal + cs.mtimes(self.B_x, gp_means)\n\n                x_ = self.x\n                dynamics_ = dynamics_equations[i]\n\n                # Add again the gp augmented dynamics for the GP state\n                dynamics_ = cs.vertcat(dynamics_)\n                dynamics_equations[i] = cs.vertcat(dynamics_equations[i])\n\n                i_name = model_name + \"_domain_\" + str(i)\n\n                params = cs.vertcat(self.gp_x, self.trigger_var)\n                acados_models[i] = fill_in_acados_model(x=x_, u=self.u, p=params, dynamics=dynamics_, name=i_name)\n\n        elif self.mlp_regressor is not None:\n            state = self.gp_x * self.trigger_var + self.x * (1 - self.trigger_var)\n            #  Transform velocity to body frame\n            v_b = v_dot_q(state[7:10], quaternion_inverse(state[3:7]))\n            state = cs.vertcat(state[:7], 
v_b, state[10:])\n            mlp_in = v_b\n\n            if self.mlp_conf['torque_output']:\n                mlp_in = cs.vertcat(mlp_in, state[10:])\n\n            if self.mlp_conf['u_inp']:\n                mlp_in = cs.vertcat(mlp_in, self.u)\n\n            if self.mlp_conf['ground_map_input']:\n                map_conf = GroundEffectMapConfig\n                map = GroundMapWithBox(np.array(map_conf.box_min),\n                                       np.array(map_conf.box_max),\n                                       map_conf.box_height,\n                                       horizon=map_conf.horizon,\n                                       resolution=map_conf.resolution)\n\n                self._map_res = map_conf.resolution\n\n                self._static_ground_map, self._org_to_map_org = map.at(np.array(map_conf.origin))\n                ground_map_dx = cs.MX(self._static_ground_map)\n\n                idx = cs.DM(np.arange(0, 3, 1))\n\n                x, y, z = state[0], state[1], state[2]\n                orientation = state[3:7]\n\n                x_idxs = cs.floor((x - self._org_to_map_org[0]) / map_conf.resolution) + idx - 1\n                y_idxs = cs.floor((y - self._org_to_map_org[1]) / map_conf.resolution) + idx - 1\n                ground_patch = ground_map_dx[x_idxs, y_idxs]\n\n                relative_ground_patch = z - ground_patch\n                relative_ground_patch = 4 * (cs.fmax(cs.fmin(relative_ground_patch, 0.5), 0.0) - 0.25)\n\n                ground_effect_in = cs.vertcat(cs.reshape(relative_ground_patch, 9, 1), orientation*0)\n\n                mlp_in = cs.vertcat(mlp_in, ground_effect_in)\n\n            if not self.mlp_conf['approximated']:\n                outs = self.mlp_regressor(mlp_in)\n            else:\n                outs = self.mlp_regressor.approx(mlp_in, order=self.mlp_conf['approx_order'], parallel=False)\n\n            if self.mlp_conf['torque_output']:\n                outs_force = outs[:3]\n                outs_torque 
= outs[3:]\n                mlp_means = cs.vertcat(v_dot_q(outs_force, state[3:7]), outs_torque)\n            else:\n                # Unpack prediction outputs. Transform back to world reference frame\n                mlp_means = v_dot_q(outs, state[3:7])\n\n            # Add GP mean prediction\n            dynamics_equations[0] = nominal + cs.mtimes(self.B_x, mlp_means)\n\n            x_ = self.x\n            dynamics_ = dynamics_equations[0]\n\n            # Add again the gp augmented dynamics for the GP state\n            dynamics_ = cs.vertcat(dynamics_)\n            dynamics_equations[0] = cs.vertcat(dynamics_equations[0])\n\n            i_name = model_name + \"_domain_\" + str(0)\n\n            if not self.mlp_conf['approximated']:\n                params = cs.vertcat(self.gp_x, self.trigger_var)\n            else:\n                params = cs.vertcat(self.gp_x, self.trigger_var,\n                                    self.mlp_regressor.sym_approx_params(order=self.mlp_conf['approx_order'],\n                                                                         flat=True))\n            acados_models[0] = fill_in_acados_model(x=x_, u=self.u, p=params, dynamics=dynamics_, name=i_name)\n\n        else:\n\n            # No available GP so return nominal dynamics\n            dynamics_equations[0] = nominal\n\n            x_ = self.x\n            dynamics_ = nominal\n\n            acados_models[0] = fill_in_acados_model(x=x_, u=self.u, p=[], dynamics=dynamics_, name=model_name)\n\n        return acados_models, dynamics_equations\n\n    def quad_dynamics(self, rdrv_d):\n        \"\"\"\n        Symbolic dynamics of the 3D quadrotor model. The state consists of: [p_xyz, a_wxyz, v_xyz, r_xyz]^T, where p\n        stands for position, a for angle (in quaternion form), v for velocity and r for body rate. The input of the\n        system is: [u_1, u_2, u_3, u_4], i.e. 
the activation of the four thrusters.\n\n        :param rdrv_d: a 3x3 diagonal matrix containing the D matrix coefficients for a linear drag model as proposed\n        by Faessler et al.\n\n        :return: CasADi function that computes the analytical differential state dynamics of the quadrotor model.\n        Inputs: 'x' state of quadrotor (13x1) and 'u' control input (4x1). Output: differential state vector 'x_dot'\n        (13x1)\n        \"\"\"\n\n        x_dot = cs.vertcat(self.p_dynamics(), self.q_dynamics(), self.v_dynamics(rdrv_d), self.w_dynamics())\n        return cs.Function('x_dot', [self.x[:13], self.u], [x_dot], ['x', 'u'], ['x_dot'])\n\n    def p_dynamics(self):\n        return self.v\n\n    def q_dynamics(self):\n        return 1 / 2 * cs.mtimes(skew_symmetric(self.r), self.q)\n\n    def v_dynamics(self, rdrv_d):\n        \"\"\"\n        :param rdrv_d: a 3x3 diagonal matrix containing the D matrix coefficients for a linear drag model as proposed\n        by Faessler et al. 
None, if no linear compensation is to be used.\n        \"\"\"\n\n        f_thrust = self.u * self.quad.max_thrust\n        g = cs.vertcat(0.0, 0.0, 9.81)\n        a_thrust = cs.vertcat(0.0, 0.0, f_thrust[0] + f_thrust[1] + f_thrust[2] + f_thrust[3]) / self.quad.mass\n\n        v_dynamics = v_dot_q(a_thrust, self.q) - g\n\n        if rdrv_d is not None:\n            # Velocity in body frame:\n            v_b = v_dot_q(self.v, quaternion_inverse(self.q))\n            rdrv_drag = v_dot_q(cs.mtimes(rdrv_d, v_b), self.q)\n            v_dynamics += rdrv_drag\n\n        return v_dynamics\n\n    def w_dynamics(self):\n        f_thrust = self.u * self.quad.max_thrust\n\n        y_f = cs.MX(self.quad.y_f)\n        x_f = cs.MX(self.quad.x_f)\n        c_f = cs.MX(self.quad.z_l_tau)\n        return cs.vertcat(\n            (cs.mtimes(f_thrust.T, y_f) + (self.quad.J[1] - self.quad.J[2]) * self.r[1] * self.r[2]) / self.quad.J[0],\n            (-cs.mtimes(f_thrust.T, x_f) + (self.quad.J[2] - self.quad.J[0]) * self.r[2] * self.r[0]) / self.quad.J[1],\n            (cs.mtimes(f_thrust.T, c_f) + (self.quad.J[0] - self.quad.J[1]) * self.r[0] * self.r[1]) / self.quad.J[2])\n\n    def linearized_quad_dynamics(self):\n        \"\"\"\n        Jacobian J matrix of the linearized dynamics specified in the function quad_dynamics. 
J[i, j] corresponds to\n        the partial derivative of f_i(x) wrt x(j).\n\n        :return: a CasADi symbolic function that calculates the 13 x 13 Jacobian matrix of the linearized simplified\n        quadrotor dynamics\n        \"\"\"\n\n        jac = cs.MX(self.state_dim, self.state_dim)\n\n        # Position derivatives\n        jac[0:3, 7:10] = cs.diag(cs.MX.ones(3))\n\n        # Angle derivatives\n        jac[3:7, 3:7] = skew_symmetric(self.r) / 2\n        jac[3, 10:] = 1 / 2 * cs.horzcat(-self.q[1], -self.q[2], -self.q[3])\n        jac[4, 10:] = 1 / 2 * cs.horzcat(self.q[0], -self.q[3], self.q[2])\n        jac[5, 10:] = 1 / 2 * cs.horzcat(self.q[3], self.q[0], -self.q[1])\n        jac[6, 10:] = 1 / 2 * cs.horzcat(-self.q[2], self.q[1], self.q[0])\n\n        # Velocity derivatives\n        a_u = (self.u[0] + self.u[1] + self.u[2] + self.u[3]) * self.quad.max_thrust / self.quad.mass\n        jac[7, 3:7] = 2 * cs.horzcat(a_u * self.q[2], a_u * self.q[3], a_u * self.q[0], a_u * self.q[1])\n        jac[8, 3:7] = 2 * cs.horzcat(-a_u * self.q[1], -a_u * self.q[0], a_u * self.q[3], a_u * self.q[2])\n        jac[9, 3:7] = 2 * cs.horzcat(0, -2 * a_u * self.q[1], -2 * a_u * self.q[1], 0)\n\n        # Rate derivatives\n        jac[10, 10:] = (self.quad.J[1] - self.quad.J[2]) / self.quad.J[0] * cs.horzcat(0, self.r[2], self.r[1])\n        jac[11, 10:] = (self.quad.J[2] - self.quad.J[0]) / self.quad.J[1] * cs.horzcat(self.r[2], 0, self.r[0])\n        jac[12, 10:] = (self.quad.J[0] - self.quad.J[1]) / self.quad.J[2] * cs.horzcat(self.r[1], self.r[0], 0)\n\n        return cs.Function('J', [self.x, self.u], [jac])\n\n    def set_reference_state(self, x_target=None, u_target=None):\n        \"\"\"\n        Sets the target state and pre-computes the integration dynamics with cost equations\n        :param x_target: 13-dimensional target state (p_xyz, a_wxyz, v_xyz, r_xyz)\n        :param u_target: 4-dimensional target control input vector (u_1, u_2, u_3, u_4)\n        
\"\"\"\n\n        if x_target is None:\n            x_target = [[0, 0, 0], [1, 0, 0, 0], [0, 0, 0], [0, 0, 0]]\n        if u_target is None:\n            u_target = [0, 0, 0, 0]\n\n        # Set new target state\n        self.target = copy(x_target)\n        self.u_target = copy(u_target)\n\n        ref = np.concatenate([x_target[i] for i in range(4)])\n        #  Transform velocity to body frame\n        v_b = v_dot_q(ref[7:10], quaternion_inverse(ref[3:7]))\n        ref = np.concatenate((ref[:7], v_b, ref[10:]))\n\n        # Determine which dynamics model to use based on the GP optimal input feature region. Should get one for each\n        # output dimension of the GP\n        if self.gp_reg_ensemble is not None:\n            gp_ind = self.gp_reg_ensemble.select_gp(dim=None, x=ref, u=u_target)\n        else:\n            gp_ind = 0\n\n        ref = np.concatenate((ref, u_target))\n\n        for j in range(self.N):\n            self.acados_ocp_solver[gp_ind].set(j, \"yref\", ref)\n        self.acados_ocp_solver[gp_ind].set(self.N, \"yref\", ref[:-4])\n\n        return gp_ind\n\n    def set_reference_trajectory(self, x_target, u_target):\n        \"\"\"\n        Sets the reference trajectory and pre-computes the cost equations for each point in the reference sequence.\n        :param x_target: Nx13-dimensional reference trajectory (p_xyz, angle_wxyz, v_xyz, rate_xyz). 
It is passed in the\n        form of a 4-length list, where the first element is a Nx3 numpy array referring to the position targets, the\n        second is a Nx4 array referring to the quaternion, two more Nx3 arrays for the velocity and body rate targets.\n        :param u_target: Nx4-dimensional target control input vector (u1, u2, u3, u4)\n        \"\"\"\n\n        if u_target is not None:\n            assert x_target[0].shape[0] == (u_target.shape[0] + 1) or x_target[0].shape[0] == u_target.shape[0]\n\n        # If not enough states in target sequence, append last state until required length is met\n        while x_target[0].shape[0] < self.N + 1:\n            x_target = [np.concatenate((x, np.expand_dims(x[-1, :], 0)), 0) for x in x_target]\n            if u_target is not None:\n                u_target = np.concatenate((u_target, np.expand_dims(u_target[-1, :], 0)), 0)\n\n        stacked_x_target = np.concatenate([x for x in x_target], 1)\n\n        #  Transform velocity to body frame\n        x_mean = stacked_x_target[int(self.N / 2)]\n        v_b = v_dot_q(x_mean[7:10], quaternion_inverse(x_mean[3:7]))\n        x_target_mean = np.concatenate((x_mean[:7], v_b, x_mean[10:]))\n\n        # Determine which dynamics model to use based on the GP optimal input feature region\n        if self.gp_reg_ensemble is not None:\n            gp_ind = self.gp_reg_ensemble.select_gp(dim=None, x=x_target_mean, u=u_target[int(self.N / 2)])\n        else:\n            gp_ind = 0\n\n        self.target = copy(x_target)\n        self.u_target = copy(u_target)\n\n        for j in range(self.N):\n            ref = stacked_x_target[j, :]\n            ref = np.concatenate((ref, u_target[j, :]))\n            self.acados_ocp_solver[gp_ind].set(j, \"yref\", ref)\n        # the last MPC node has only a state reference but no input reference\n        self.acados_ocp_solver[gp_ind].set(self.N, \"yref\", stacked_x_target[self.N, :])\n        return gp_ind\n\n    def discretize_f_and_q(self, 
t_horizon, n, m=1, i=0, use_gp=True, use_model=0):\n        \"\"\"\n        Discretize the model dynamics and the pre-computed cost function if available.\n        :param t_horizon: time horizon in seconds\n        :param n: number of control steps until time horizon\n        :param m: number of integration steps per control step\n        :param i: Only used for trajectory tracking. Index of cost function to use.\n        :param use_gp: Whether to use the dynamics with the GP correction or not.\n        :param use_model: integer, select which model to use from the available options.\n        :return: the symbolic, discretized dynamics. The inputs of the symbolic function are x0 (the initial state) and\n        p, the control input vector. The outputs are xf (the updated state) and qf. qf is the corresponding cost\n        function of the integration, which is calculated from the pre-computed discrete-time model dynamics (self.L)\n        \"\"\"\n\n        dynamics = self.quad_xdot[use_model] if use_gp else self.quad_xdot_nominal\n\n        # self.x is passed here regardless of use_gp\n        return discretize_dynamics_and_cost(t_horizon, n, m, self.x, self.u, dynamics, self.L, i)\n\n    def run_optimization(self, initial_state=None, use_model=0, return_x=False, gp_regression_state=None):\n        \"\"\"\n        Optimizes a trajectory to reach the pre-set target state, starting from the given initial state, minimizing\n        the quadratic cost function while respecting the constraints of the system\n\n        :param initial_state: 13-element list of the initial state. If None, the zero state will be used\n        :param use_model: integer, select which model to use from the available options.\n        :param return_x: bool, whether to also return the optimized sequence of states alongside the controls.\n        :param gp_regression_state: 13-element list of state for GP prediction. 
If None, initial_state will be used.\n        :return: optimized control input sequence (flattened)\n        \"\"\"\n\n        if initial_state is None:\n            initial_state = [0, 0, 0] + [1, 0, 0, 0] + [0, 0, 0] + [0, 0, 0]\n\n        # Set initial state. Add gp state if needed\n        x_init = initial_state\n        x_init = np.stack(x_init)\n\n        # Set initial condition, equality constraint\n        self.acados_ocp_solver[use_model].set(0, 'lbx', x_init)\n        self.acados_ocp_solver[use_model].set(0, 'ubx', x_init)\n\n        # Set parameters\n        if self.with_gp:\n            gp_state = gp_regression_state if gp_regression_state is not None else initial_state\n            self.acados_ocp_solver[use_model].set(0, 'p', np.array(gp_state + [1]))\n            for j in range(1, self.N):\n                self.acados_ocp_solver[use_model].set(j, 'p', np.array([0.0] * (len(gp_state) + 1)))\n\n        if self.with_mlp:\n            if self.x_opt_acados is None:\n                if isinstance(self.target[0], list):\n                    self.x_opt_acados = np.expand_dims(\n                        np.concatenate([self.target[i] for i in range(len(self.target))]), 0)\n                    self.x_opt_acados = self.x_opt_acados.repeat(self.N, 0)\n                else:\n                    self.x_opt_acados = np.hstack(self.target)\n            if self.w_opt_acados is None:\n                if len(self.u_target.shape) == 1:\n                    self.w_opt_acados = self.u_target[np.newaxis]\n                    self.w_opt_acados = self.w_opt_acados.repeat(self.N, 0)\n                else:\n                    self.w_opt_acados = np.hstack(self.u_target)\n\n            gp_state = gp_regression_state if gp_regression_state is not None else initial_state\n            if not self.mlp_conf['approximated']:\n                self.acados_ocp_solver[use_model].set(0, 'p', np.hstack([np.array(gp_state + [1])]))\n                for j in range(1, self.N):\n               
     self.acados_ocp_solver[use_model].set(j, 'p', np.hstack([np.array([0.0] * (len(gp_state) + 1))]))\n            else:\n                state = np.vstack([np.array([initial_state]), self.x_opt_acados[1:]])\n                a_list = []\n                for i in range(state.shape[0]):\n                    a_list.append(v_dot_q(np.array(state[i, 7:10]), quaternion_inverse(np.array(state[i, 3:7]))))\n                a = np.array(a_list)[:self.N]\n\n                if self.mlp_conf['torque_output']:\n                    a = np.concatenate([a, state[:self.N, 10:]], axis=-1)\n\n                if self.mlp_conf['u_inp']:\n                    a = np.concatenate([a, self.w_opt_acados], axis=-1)\n\n                if self.mlp_conf['ground_map_input']:\n                    ground_maps = []\n                    for i in range(state.shape[0]):\n                        pos = state[i][:3]\n                        x_idxs = np.floor((pos[0] - self._org_to_map_org[0]) / self._map_res).astype(int) - 1\n                        y_idxs = np.floor((pos[1] - self._org_to_map_org[1]) / self._map_res).astype(int) - 1\n                        ground_patch = self._static_ground_map[x_idxs:x_idxs + 3, y_idxs:y_idxs + 3]\n\n                        relative_ground_patch = 4 * (np.clip(pos[2] - ground_patch, 0, 0.5) - 0.25)\n\n                        flatten_relative_ground_patch = relative_ground_patch.flatten(order='F')\n\n                        ground_effect_in = np.hstack([flatten_relative_ground_patch,\n                                                      flatten_relative_ground_patch[..., :4] * 0])\n\n                        ground_maps.append(ground_effect_in)\n\n                    a = np.concatenate([a, np.array(ground_maps)[:self.N]], axis=-1)\n\n                mlp_params = self.mlp_regressor.approx_params(a, order=self.mlp_conf['approx_order'], flat=True)\n                mlp_params = np.vstack([mlp_params, mlp_params[[-1]]])\n                
self.acados_ocp_solver[use_model].set(0, 'p',\n                                                      np.hstack([np.array(gp_state + [1]), mlp_params[0]]))\n                for j in range(1, self.N):\n                    self.acados_ocp_solver[use_model].set(j, 'p', np.hstack([np.array([0.0] * (len(gp_state) + 1)),\n                                                                             mlp_params[j]]))\n\n        # Solve OCP\n        self.acados_ocp_solver[use_model].solve()\n\n        # Get u\n        w_opt_acados = np.ndarray((self.N, 4))\n        x_opt_acados = np.ndarray((self.N + 1, len(x_init)))\n        x_opt_acados[0, :] = self.acados_ocp_solver[use_model].get(0, \"x\")\n        for i in range(self.N):\n            w_opt_acados[i, :] = self.acados_ocp_solver[use_model].get(i, \"u\")\n            x_opt_acados[i + 1, :] = self.acados_ocp_solver[use_model].get(i + 1, \"x\")\n\n        self.x_opt_acados = x_opt_acados.copy()\n        self.w_opt_acados = w_opt_acados.copy()\n\n        w_opt_acados = np.reshape(w_opt_acados, (-1))\n        return w_opt_acados if not return_x else (w_opt_acados, x_opt_acados)\n"
  },
  {
    "path": "ros_dd_mpc/src/utils/__init__.py",
    "content": ""
  },
  {
    "path": "ros_dd_mpc/src/utils/animator.py",
    "content": "\"\"\" Class for generating a comprehensive post-processed visualization of experimental flight results.\n\nThis program is free software: you can redistribute it and/or modify it under\nthe terms of the GNU General Public License as published by the Free Software\nFoundation, either version 3 of the License, or (at your option) any later\nversion.\nThis program is distributed in the hope that it will be useful, but WITHOUT\nANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\nFOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.\nYou should have received a copy of the GNU General Public License along with\nthis program. If not, see <http://www.gnu.org/licenses/>.\n\"\"\"\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom mpl_toolkits.mplot3d import axes3d\nimport matplotlib.animation as animation\nfrom config.configuration_parameters import DirectoryConfig\n\n\nclass Dynamic3DTrajectory:\n    def __init__(self, pos_data, vel_data, pos_ref, vel_ref, t_vec_ref, legends, sparsing_factor=1):\n\n        # Add reference to data so it also moves\n        pos_data = pos_ref + pos_data\n        self.data = np.array(pos_data)\n\n        self.reference = pos_ref[0]\n        self.data_len = len(pos_data[0])\n        self.n_lines = len(pos_data)\n\n        self.t_vec = t_vec_ref - t_vec_ref[0]\n\n        self.legends = legends\n\n        vel_data = vel_ref + vel_data\n        self.vel_data = np.array(vel_data)\n\n        if sparsing_factor == 0:\n            sparse_data = np.arange(0, self.data_len)\n        else:\n            sparse_data = np.arange(0, self.data_len, sparsing_factor)\n\n        self.sparsed_data = self.data[:, sparse_data, :]\n        self.vel_data = self.vel_data[:, sparse_data, :]\n        self.t_vec = self.t_vec[sparse_data]\n\n        self.max_buffer_size = 50\n        self.colors = [\"tab:blue\", \"tab:orange\", \"tab:green\", \"tab:red\"]\n\n        self.data_len = 
len(sparse_data)\n\n        x_data = np.concatenate(tuple([dat[sparse_data, 0] for dat in pos_data]))\n        y_data = np.concatenate(tuple([dat[sparse_data, 1] for dat in pos_data]))\n        z_data = np.concatenate(tuple([dat[sparse_data, 2] for dat in pos_data]))\n\n        self.max_x = np.max(x_data)\n        self.min_x = np.min(x_data)\n        self.max_y = np.max(y_data)\n        self.min_y = np.min(y_data)\n        self.max_z = np.max(z_data)\n        self.min_z = np.min(z_data)\n\n        # Make dimensions more similar and add a bit of padding\n        range_x = self.max_x - self.min_x\n        range_y = self.max_y - self.min_y\n        range_z = self.max_z - self.min_z\n        max_range = max(range_x, range_y, range_z)\n        self.max_x = self.max_x + 0.25 * (max_range - range_x) + (self.max_x - self.min_x) * 0.2\n        self.min_x = self.min_x - 0.25 * (max_range - range_x) - (self.max_x - self.min_x) * 0.2\n        self.max_y = self.max_y + 0.25 * (max_range - range_y) + (self.max_y - self.min_y) * 0.2\n        self.min_y = self.min_y - 0.25 * (max_range - range_y) - (self.max_y - self.min_y) * 0.2\n        self.max_z = self.max_z + 0.25 * (max_range - range_z) + (self.max_z - self.min_z) * 0.2\n        self.min_z = self.min_z - 0.25 * (max_range - range_z) - (self.max_z - self.min_z) * 0.2\n\n        self.figure = None\n        self.ax = None\n        self.pos_err_ax = None\n        self.speed_ax = None\n        self.top_down_ax = None\n        self.x_time_ax = None\n        self.y_time_ax = None\n\n        self.lines = None\n\n        self.n_3d_lines = None\n        self.vel_bars_0_idx = None\n        self.pos_err_0_idx = None\n        self.top_down_0_idx = None\n        self.x_time_0_idx = None\n        self.y_time_0_idx = None\n\n    def on_launch(self):\n        self.figure = plt.figure(figsize=(14, 10))\n\n        self.ax = axes3d.Axes3D(self.figure, rect=(-0.02, 0.3, 0.65, 0.7))\n\n        self.ax.set_zlim3d([self.min_z, self.max_z])\n        
self.ax.set_ylim3d([self.min_y, self.max_y])\n        self.ax.set_xlim3d([self.min_x, self.max_x])\n        self.ax.set_xlabel(r'$p_x\\: [m]$')\n        self.ax.set_ylabel(r'$p_y\\: [m]$')\n        self.ax.set_zlabel(r'$p_z\\: [m]$')\n\n        self.ax.plot(self.reference[:, 0], self.reference[:, 1], self.reference[:, 2], '-', alpha=0.2)\n\n        self.lines = sum([self.ax.plot([], [], [], '-', c=self.colors[i], label=self.legends[i])\n                          for i in range(self.n_lines)], [])\n        projection_lines = sum([self.ax.plot([], [], [], '-', c=self.colors[i], alpha=0.2)\n                                for i in range(self.n_lines)], [])\n        projection_lines += sum([self.ax.plot([], [], [], '-', c=self.colors[i], alpha=0.2)\n                                 for i in range(self.n_lines)], [])\n        projection_lines += sum([self.ax.plot([], [], [], '-', c=self.colors[i], alpha=0.2)\n                                 for i in range(self.n_lines)], [])\n\n        self.lines += projection_lines\n\n        pos_balls = sum([self.ax.plot([], [], [], 'o', c=self.colors[i]) for i in range(self.n_lines)], [])\n        self.lines += pos_balls\n        self.n_3d_lines = len(self.lines)\n\n        self.ax.legend()\n        self.ax.set_title('3D Visualization')\n\n        self.vel_bars_0_idx = len(self.lines)\n        bar_height = 0.1\n\n        self.speed_ax = plt.axes((0.65, 0.8, 0.3, 0.05))\n        vel_bar = [self.speed_ax.barh([0], [0], color=self.colors[0], height=bar_height)[0]]\n        self.speed_ax.set_xlim([0, np.max(np.sqrt(np.sum(self.vel_data[0] ** 2, -1))) * 1.05])\n        self.speed_ax.set_ylim([-bar_height, bar_height * 1.05])\n\n        self.speed_ax.set_xlabel(r'$\\|\\mathbf{v}\\|\\:[m/s]$')\n        self.speed_ax.set_yticks([0])\n        self.speed_ax.set_yticklabels([self.legends[0]])\n        self.speed_ax.grid()\n        self.speed_ax.set_title('Quadrotor speed')\n\n        self.lines += vel_bar\n\n        self.pos_err_ax = 
plt.axes((0.65, 0.6, 0.3, 0.1))\n        self.pos_err_ax.grid()\n        pos_err = []\n        p_errors = np.stack([self.data[0] - self.data[i+1] for i in range(self.n_lines - 1)])\n\n        pos_err += [self.pos_err_ax.barh([(i - 1) * bar_height * 1.05], [i], color=self.colors[i], height=bar_height)[0]\n                    for i in range(1, self.n_lines)]\n        self.pos_err_ax.set_xlim([np.min(p_errors) * 1.05, np.max(p_errors) * 1.05])\n        self.pos_err_ax.set_ylim([-bar_height, ((self.n_lines - 1) * 1.05) * bar_height])\n        self.pos_err_ax.set_title('XY position error')\n        self.pos_err_ax.set_xlabel(r'$\\|\\mathbf{p}^* - \\mathbf{p}\\|\\: [m]$')\n        self.pos_err_ax.set_yticks([i * bar_height * 1.05 for i in range(self.n_lines - 1)])\n        self.pos_err_ax.set_yticklabels(self.legends[1:])\n\n        self.pos_err_0_idx = len(self.lines)\n        self.lines += pos_err\n\n        self.top_down_ax = plt.axes((0.65, 0.08, 0.3, 0.4))\n        top_down_lines = sum([self.top_down_ax.plot([], [], color=self.colors[i], label=self.legends[i])\n                              for i in range(self.n_lines)], [])\n        top_down_lines += sum([self.top_down_ax.plot([], [], 'o', color=self.colors[i])\n                               for i in range(self.n_lines)], [])\n        self.top_down_ax.plot(self.reference[:, 0], self.reference[:, 1], '-', alpha=0.2)\n        self.top_down_0_idx = len(self.lines)\n        self.top_down_ax.grid()\n        self.top_down_ax.set_xlim([self.min_x, self.max_x])\n        self.top_down_ax.set_ylim([self.min_y, self.max_y])\n        self.top_down_ax.set_title('Top down view')\n        self.top_down_ax.set_xlabel(r'$p_x\\:[m]$')\n        self.top_down_ax.set_ylabel(r'$p_y\\:[m]$')\n        self.top_down_ax.legend()\n        self.lines += top_down_lines\n\n        self.x_time_ax = plt.axes((0.05, 0.08, 0.26, 0.2))\n        self.x_time_ax.axhline(y=0, linestyle='--', alpha=0.5)\n        x_time_lines = 
sum([self.x_time_ax.plot([], [], color=self.colors[i], label=self.legends[i])\n                            for i in range(1, self.n_lines)], [])\n        self.x_time_0_idx = len(self.lines)\n        self.lines += x_time_lines\n        self.x_time_ax.set_xlim([self.t_vec[0], self.t_vec[-1]])\n        self.x_time_ax.set_ylim([0, np.max(np.abs(p_errors[:, :, 0])) * 1.05])\n        self.x_time_ax.grid()\n        self.x_time_ax.set_title(r'$p_x$ error')\n        self.x_time_ax.set_ylabel('pos [m]')\n        self.x_time_ax.legend(loc='upper right')\n\n        self.y_time_ax = plt.axes((0.34, 0.08, 0.26, 0.2))\n        self.y_time_ax.axhline(y=0, linestyle='--', alpha=0.5)\n        y_time_lines = sum([self.y_time_ax.plot([], [], color=self.colors[i], label=self.legends[i])\n                            for i in range(1, self.n_lines)], [])\n        self.y_time_0_idx = len(self.lines)\n        self.lines += y_time_lines\n        self.y_time_ax.set_xlim([self.t_vec[0], self.t_vec[-1]])\n        self.y_time_ax.set_ylim([0, np.max(np.abs(p_errors[:, :, 1])) * 1.05])\n        self.y_time_ax.set_xlabel('Time [s]')\n        self.y_time_ax.set_title(r'$p_y$ error')\n        self.y_time_ax.grid()\n        self.y_time_ax.legend(loc='upper right')\n\n    def on_init(self):\n        for i in range(self.n_3d_lines):\n            self.lines[i].set_data([], [])\n            self.lines[i].set_3d_properties([])\n\n        self.lines[self.vel_bars_0_idx].set_width(0)\n\n        for i in range(self.n_lines - 1):\n            self.lines[self.pos_err_0_idx + i].set_width(0)\n\n        for i in range(self.n_lines * 2):\n            self.lines[self.top_down_0_idx + i].set_data([], [])\n\n        for i in range(self.n_lines - 1):\n            self.lines[self.x_time_0_idx + i].set_data([], [])\n            self.lines[self.y_time_0_idx + i].set_data([], [])\n\n        return self.lines\n\n    def animate(self, i):\n        #i = (2 * i) % self.data.shape[1]\n        for j, (line, xi) in 
enumerate(zip(self.lines[:self.n_lines], self.sparsed_data)):\n            x, y, z = xi[:i].T\n\n            if len(x) > self.max_buffer_size:\n                x = x[len(x) - self.max_buffer_size:]\n                y = y[len(y) - self.max_buffer_size:]\n                z = z[len(z) - self.max_buffer_size:]\n\n            line.set_data(x, y)\n            line.set_3d_properties(z)\n\n            self.lines[j + self.n_lines].set_data(x, np.ones(len(x)) * self.max_y)\n            self.lines[j + self.n_lines].set_3d_properties(z)\n\n            self.lines[j + 2 * self.n_lines].set_data(np.ones(len(y)) * self.min_x, y)\n            self.lines[j + 2 * self.n_lines].set_3d_properties(z)\n\n            self.lines[j + 3 * self.n_lines].set_data(x, y)\n            self.lines[j + 3 * self.n_lines].set_3d_properties(self.min_z)\n\n            if len(x) > 0:\n                self.lines[j + 4 * self.n_lines].set_data([x[-1]], [y[-1]])\n                self.lines[j + 4 * self.n_lines].set_3d_properties([z[-1]])\n\n                self.lines[self.top_down_0_idx + j + self.n_lines].set_data([x[-1]], [y[-1]])\n\n            self.lines[self.top_down_0_idx + j].set_data(x, y)\n\n        self.lines[self.vel_bars_0_idx].set_width(np.sqrt(np.sum(self.vel_data[0, i, :] ** 2)))\n\n        for j in range(self.n_lines - 1):\n            self.lines[self.pos_err_0_idx + j].set_width(\n                np.sqrt(np.sum((self.data[0][i, :2] - self.data[j+1][i, :2]) ** 2)))\n\n        for j in reversed(range(self.n_lines - 1)):\n            self.lines[self.x_time_0_idx + j].set_data(self.t_vec[:i], np.abs(self.data[0, :i, 0] - self.data[j + 1, :i, 0]))\n            self.lines[self.y_time_0_idx + j].set_data(self.t_vec[:i], np.abs(self.data[0, :i, 1] - self.data[j + 1, :i, 1]))\n\n        return self.lines\n\n    def __call__(self, save=False):\n        self.on_launch()\n\n        ani = animation.FuncAnimation(self.figure, self.animate, init_func=self.on_init, frames=self.data_len,\n                                      
interval=20, blit=True, repeat=False)\n\n        if save:\n            results_dir = DirectoryConfig.RESULTS_DIR\n            plt.rcParams['animation.ffmpeg_path'] = '/usr/local/bin/ffmpeg'\n            writer = animation.FFMpegWriter(fps=50)\n            ani.save(results_dir + '/animation.mp4', writer=writer)\n\n        else:\n            plt.show()\n"
  },
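The `animate` callback above trims each 3D trace to a rolling window and drives the error-bar widths with a Euclidean XY distance. A minimal numpy sketch of those two computations (standalone, with made-up sample values; not the repository's visualization class):

```python
import numpy as np

def trim_tail(samples, max_buffer_size):
    # Keep only the most recent max_buffer_size samples, mirroring the
    # rolling-window trim inside animate().
    if len(samples) > max_buffer_size:
        return samples[len(samples) - max_buffer_size:]
    return samples

# XY tracking error for one frame, the same formula that drives the
# error-bar widths (reference and tracked positions are made-up values).
ref_xy = np.array([1.0, 2.0])
pos_xy = np.array([1.3, 1.6])
xy_err = np.sqrt(np.sum((ref_xy - pos_xy) ** 2))

print(trim_tail(np.arange(10), 4))  # [6 7 8 9]
print(round(float(xy_err), 3))      # 0.5
```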
  {
    "path": "ros_dd_mpc/src/utils/ground_map.py",
    "content": "import numpy as np\n\n\nclass GroundMap:\n    def __init__(self, horizon=((-1, 1), (-1, 1)), resolution=0.1):\n        assert ((horizon[0][1] - horizon[0][0]) / resolution).is_integer()\n        assert ((horizon[1][1] - horizon[1][0]) / resolution).is_integer()\n        self._horizon = horizon\n        self._resolution = resolution\n\n    def at(self, center: np.array):\n        map_center_to_map_origin = np.array([self._horizon[0][0], self._horizon[1][0]])\n        map_origin = center + map_center_to_map_origin\n        return self.draw(map_origin - 0.5 * self._resolution, self.empty_map), map_origin\n\n    def draw(self, pos, map):\n        raise NotImplementedError\n\n    @property\n    def empty_map(self):\n        x_len = int((self._horizon[0][1] - self._horizon[0][0]) / self._resolution) + 1\n        y_len = int((self._horizon[1][1] - self._horizon[1][0]) / self._resolution) + 1\n        empty_map = np.zeros((x_len, y_len))\n        return empty_map\n\n\nclass GroundMapWithBox(GroundMap):\n    def __init__(self, box_min, box_max, height, *args, **kwargs):\n        super().__init__(*args, **kwargs)\n        self._box_min = box_min\n        self._box_max = box_max\n        self._height = height\n\n    def draw(self, pos, map):\n        x_from = max(int((self._box_min[0] - pos[0]) // self._resolution), 0)\n        x_to = min(int((self._box_max[0] - pos[0]) // self._resolution), map.shape[0] - 1)\n        y_from = max(int((self._box_min[1] - pos[1]) // self._resolution), 0)\n        y_to = min(int((self._box_max[1] - pos[1]) // self._resolution), map.shape[1] - 1)\n        map[x_from:x_to, y_from:y_to] = self._height\n        return map\n\nif __name__ == '__main__':\n    horizon = ((-7, 7), (-7, 7))\n    box_min_ = (-4.25, 9.37)\n    box_max_ = (-2.76, 10.13)\n    box_height_ = 1.0\n\n    map = GroundMapWithBox(box_min_, box_max_, box_height_, horizon=horizon)\n\n    map_at, o = map.at(np.array([3, 9]))\n\n    print(map_at)\n\n    m_x = int((1.01 
+ o[0]) / 0.1)\n\n    print(m_x)"
  },
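`GroundMapWithBox.draw` rasterizes a world-frame box into the grid by floor-dividing the offsets from the map origin by the resolution, clipped to the grid bounds. A simplified standalone sketch of that index math (the half-cell offset applied in `GroundMap.at` is omitted here, and the box corners are made-up values):

```python
import numpy as np

resolution = 0.1
horizon = ((-1.0, 1.0), (-1.0, 1.0))

# Empty-map sizing, as in GroundMap.empty_map.
x_len = int((horizon[0][1] - horizon[0][0]) / resolution) + 1
y_len = int((horizon[1][1] - horizon[1][0]) / resolution) + 1
grid = np.zeros((x_len, y_len))

map_origin = np.array([horizon[0][0], horizon[1][0]])  # lower-left corner
box_min, box_max, height = (0.25, 0.25), (0.55, 0.55), 1.0

# World coordinates -> grid indices via floor division, clipped to bounds.
x_from = max(int((box_min[0] - map_origin[0]) // resolution), 0)
x_to = min(int((box_max[0] - map_origin[0]) // resolution), grid.shape[0] - 1)
y_from = max(int((box_min[1] - map_origin[1]) // resolution), 0)
y_to = min(int((box_max[1] - map_origin[1]) // resolution), grid.shape[1] - 1)
grid[x_from:x_to, y_from:y_to] = height

print(grid.shape)       # (21, 21)
print(int(grid.sum()))  # 9 cells raised to height 1.0
```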
  {
    "path": "ros_dd_mpc/src/utils/keyframe_3d_gen.py",
    "content": "\"\"\" Generates a set of keypoints to generate a piece-wise polynomial trajectory between each pair.\n\nThis program is free software: you can redistribute it and/or modify it under\nthe terms of the GNU General Public License as published by the Free Software\nFoundation, either version 3 of the License, or (at your option) any later\nversion.\nThis program is distributed in the hope that it will be useful, but WITHOUT\nANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\nFOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.\nYou should have received a copy of the GNU General Public License along with\nthis program. If not, see <http://www.gnu.org/licenses/>.\n\"\"\"\n\n\nfrom sklearn.gaussian_process import GaussianProcessRegressor\nfrom sklearn.gaussian_process.kernels import ExpSineSquared\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n\n\ndef apply_map_limits(x, y, z, limits):\n\n    # Find out which axis is the most constrained\n    x_max_range = limits[\"x\"][1] - limits[\"x\"][0]\n    y_max_range = limits[\"y\"][1] - limits[\"y\"][0]\n    z_max_range = limits[\"z\"][1] - limits[\"z\"][0]\n\n    x_actual_range = np.max(x) - np.min(x)\n    y_actual_range = np.max(y) - np.min(y)\n    z_actual_range = np.max(z) - np.min(z)\n\n    # One or more of the ranges violates the constraints.\n    if x_actual_range > x_max_range or y_actual_range > y_max_range or z_actual_range > z_max_range:\n        shrink_ratio = max(x_actual_range / x_max_range, y_actual_range / y_max_range, z_actual_range / z_max_range)\n        x = (x - np.mean(x)) / shrink_ratio\n        y = (y - np.mean(y)) / shrink_ratio\n        z = (z - np.mean(z)) / shrink_ratio\n\n        x += (limits[\"x\"][0] - np.min(x))\n        y += (limits[\"y\"][0] - np.min(y))\n        z += (limits[\"z\"][0] - np.min(z))\n\n    return x, y, z\n\n\ndef center_and_scale(x, y, z, x_max, x_min, y_max, 
y_min, z_max, z_min):\n\n    x -= (x_min + (x_max - x_min) / 2)\n    y -= (y_min + (y_max - y_min) / 2)\n    z -= (z_min + (z_max - z_min) / 2)\n\n    scaling = np.mean([x_max - x_min, y_max - y_min, z_max - z_min])\n    x = x * 6 / scaling\n    y = y * 6 / scaling\n    z = z * 6 / scaling\n\n    return x, y, z\n\n\ndef random_periodical_trajectory(plot=False, random_state=None, map_limits=None):\n\n    if random_state is None:\n        random_state = np.random.randint(0, 9999)\n\n    kernel_z = ExpSineSquared(length_scale=5.5, periodicity=60)\n    kernel_y = ExpSineSquared(length_scale=4.5, periodicity=30) + ExpSineSquared(length_scale=4.0, periodicity=15)\n    kernel_x = ExpSineSquared(length_scale=4.5, periodicity=30) + ExpSineSquared(length_scale=4.5, periodicity=60)\n\n    gp_x = GaussianProcessRegressor(kernel=kernel_x)\n    gp_y = GaussianProcessRegressor(kernel=kernel_y)\n    gp_z = GaussianProcessRegressor(kernel=kernel_z)\n\n    # High resolution sampling for track boundaries\n    inputs_x = np.linspace(0, 60, 100)\n    inputs_y = np.linspace(0, 30, 100)\n    inputs_z = np.linspace(0, 60, 100)\n\n    x_sample_hr = gp_x.sample_y(inputs_x[:, np.newaxis], 1, random_state=random_state)\n    y_sample_hr = gp_y.sample_y(inputs_y[:, np.newaxis], 1, random_state=random_state)\n    z_sample_hr = gp_z.sample_y(inputs_z[:, np.newaxis], 1, random_state=random_state)\n\n    max_x_coords = np.max(x_sample_hr, 0)\n    max_y_coords = np.max(y_sample_hr, 0)\n    max_z_coords = np.max(z_sample_hr, 0)\n\n    min_x_coords = np.min(x_sample_hr, 0)\n    min_y_coords = np.min(y_sample_hr, 0)\n    min_z_coords = np.min(z_sample_hr, 0)\n\n    x_sample_hr, y_sample_hr, z_sample_hr = center_and_scale(\n        x_sample_hr, y_sample_hr, z_sample_hr,\n        max_x_coords, min_x_coords, max_y_coords, min_y_coords, max_z_coords, min_z_coords)\n\n    # Additional constraint on map limits\n    if map_limits is not None:\n        x_sample_hr, y_sample_hr, z_sample_hr = 
apply_map_limits(x_sample_hr, y_sample_hr, z_sample_hr, map_limits)\n\n    # Low resolution for control points\n    lr_ind = np.linspace(0, len(x_sample_hr) - 1, 10, dtype=int)\n    lr_ind[-1] = 0\n    x_sample_lr = x_sample_hr[lr_ind, :]\n    y_sample_lr = y_sample_hr[lr_ind, :]\n    z_sample_lr = z_sample_hr[lr_ind, :]\n\n    x_sample_diff = x_sample_hr[lr_ind + 1, :] - x_sample_lr\n    y_sample_diff = y_sample_hr[lr_ind + 1, :] - y_sample_lr\n    z_sample_diff = z_sample_hr[lr_ind + 1, :] - z_sample_lr\n\n    # Get angles at keypoints\n    a_x = np.arctan2(z_sample_diff, y_sample_diff) * 0\n    a_y = np.arctan2(x_sample_diff, z_sample_diff) * 0\n    a_z = (np.arctan2(y_sample_diff, x_sample_diff) - np.pi/4) * 0\n\n    if plot:\n        # Plot checking\n        # Build rotation matrices\n        rx = [np.array([[1, 0, 0], [0, np.cos(a), -np.sin(a)], [0, np.sin(a), np.cos(a)]]) for a in a_x[:, 0]]\n        ry = [np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]]) for a in a_y[:, 0]]\n        rz = [np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]]) for a in a_z[:, 0]]\n\n        main_axes = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])\n        quiver_axes = np.zeros((len(lr_ind), 3, 3))\n\n        for i in range(len(lr_ind)):\n            r_mat = rz[i].dot(ry[i].dot(rx[i]))\n            rot_body = r_mat.dot(main_axes)\n            quiver_axes[i, :, :] = rot_body\n\n        shortest_axis = min(np.max(x_sample_hr) - np.min(x_sample_hr),\n                            np.max(y_sample_hr) - np.min(y_sample_hr),\n                            np.max(z_sample_hr) - np.min(z_sample_hr))\n        fig = plt.figure()\n        ax = fig.add_subplot(111, projection='3d')\n        ax.scatter(x_sample_lr, y_sample_lr, z_sample_lr)\n        ax.plot(x_sample_hr[:, 0], y_sample_hr[:, 0], z_sample_hr[:, 0], '-', alpha=0.5)\n        ax.quiver(x_sample_lr[:, 0], y_sample_lr[:, 0], z_sample_lr[:, 0],\n                  x_sample_diff[:, 0], 
y_sample_diff[:, 0], z_sample_diff[:, 0], color='g',\n                  length=shortest_axis / 6, normalize=True, label='traj. norm')\n        ax.quiver(x_sample_lr, y_sample_lr, z_sample_lr,\n                  quiver_axes[:, 0, :], quiver_axes[:, 1, :], quiver_axes[:, 2, :], color='b',\n                  length=shortest_axis / 6, normalize=True, label='quad. att.')\n        ax.tick_params(labelsize=16)\n        ax.legend(fontsize=16)\n        ax.set_xlabel('x [m]', size=16, labelpad=10)\n        ax.set_ylabel('y [m]', size=16, labelpad=10)\n        ax.set_zlabel('z [m]', size=16, labelpad=10)\n        ax.set_xlim([np.min(x_sample_hr), np.max(x_sample_hr)])\n        ax.set_ylim([np.min(y_sample_hr), np.max(y_sample_hr)])\n        ax.set_zlim([np.min(z_sample_hr), np.max(z_sample_hr)])\n        ax.set_title('Source keypoints', size=18)\n        plt.show()\n\n    curve = np.concatenate((x_sample_lr, y_sample_lr, z_sample_lr), 1)\n    attitude = np.concatenate((a_x, a_y, a_z), 1)\n\n    return curve, attitude\n\n\nif __name__ == \"__main__\":\n    limits = {\n        \"x\": [-0.6, 4],\n        \"y\": [-2, 2],\n        \"z\": [0.1, 2]\n    }\n    random_periodical_trajectory(plot=True, map_limits=limits)\n"
  },
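`center_and_scale` above recenters each axis on zero, then rescales all three axes uniformly so the mean of the three axis ranges becomes 6 m. A standalone numpy sketch of the same arithmetic (it computes the per-axis max/min internally rather than taking them as arguments, as the file's version does):

```python
import numpy as np

def center_and_scale_sketch(x, y, z):
    # Center each axis on zero, then scale all three uniformly so the
    # mean of the three ranges becomes 6 m (mirrors center_and_scale).
    def rng(a):
        return a.max() - a.min()
    x = x - (x.min() + rng(x) / 2)
    y = y - (y.min() + rng(y) / 2)
    z = z - (z.min() + rng(z) / 2)
    scaling = np.mean([rng(x), rng(y), rng(z)])
    return x * 6 / scaling, y * 6 / scaling, z * 6 / scaling

# Ranges 2, 4 and 6 have mean 4, so every axis is scaled by 6 / 4 = 1.5.
xs, ys, zs = center_and_scale_sketch(np.array([0.0, 2.0]),
                                     np.array([1.0, 5.0]),
                                     np.array([-3.0, 3.0]))
print(xs, ys, zs)  # [-1.5  1.5] [-3.  3.] [-4.5  4.5]
```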
  {
    "path": "ros_dd_mpc/src/utils/quad_3d_opt_utils.py",
    "content": "\"\"\" Set of utility functions for the quadrotor optimizer and simplified simulator.\n\nThis program is free software: you can redistribute it and/or modify it under\nthe terms of the GNU General Public License as published by the Free Software\nFoundation, either version 3 of the License, or (at your option) any later\nversion.\nThis program is distributed in the hope that it will be useful, but WITHOUT\nANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\nFOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.\nYou should have received a copy of the GNU General Public License along with\nthis program. If not, see <http://www.gnu.org/licenses/>.\n\"\"\"\n\nimport casadi as cs\nimport numpy as np\nfrom src.quad_mpc.quad_3d import Quadrotor3D\nfrom tqdm import tqdm\n\n\ndef discretize_dynamics_and_cost(t_horizon, n_points, m_steps_per_point, x, u, dynamics_f, cost_f, ind):\n    \"\"\"\n    Integrates the symbolic dynamics and cost equations until the time horizon using a RK4 method.\n    :param t_horizon: time horizon in seconds\n    :param n_points: number of control input points until time horizon\n    :param m_steps_per_point: number of integrations steps per control input\n    :param x: 4-element list with symbolic vectors for position (3D), angle (4D), velocity (3D) and rate (3D)\n    :param u: 4-element symbolic vector for control input\n    :param dynamics_f: symbolic dynamics function written in CasADi symbolic syntax.\n    :param cost_f: symbolic cost function written in CasADi symbolic syntax. If None, then cost 0 is returned.\n    :param ind: Only used for trajectory tracking. 
Index of cost function to use.\n    :return: a symbolic function that integrates the dynamics and accumulates the cost over one control node\n    (m_steps_per_point RK4 steps), given an initial state and a control input.\n    \"\"\"\n\n    if isinstance(cost_f, list):\n        # Select the list of cost functions\n        cost_f = cost_f[ind * m_steps_per_point:(ind + 1) * m_steps_per_point]\n    else:\n        cost_f = [cost_f] * m_steps_per_point\n\n    # Fixed step Runge-Kutta 4 integrator\n    dt = t_horizon / n_points / m_steps_per_point\n    x0 = x\n    q = 0\n\n    for j in range(m_steps_per_point):\n        k1 = dynamics_f(x=x, u=u)['x_dot']\n        k2 = dynamics_f(x=x + dt / 2 * k1, u=u)['x_dot']\n        k3 = dynamics_f(x=x + dt / 2 * k2, u=u)['x_dot']\n        k4 = dynamics_f(x=x + dt * k3, u=u)['x_dot']\n        x_out = x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)\n\n        x = x_out\n\n        if cost_f and cost_f[j] is not None:\n            q = q + cost_f[j](x=x, u=u)['q']\n\n    return cs.Function('F', [x0, u], [x, q], ['x0', 'p'], ['xf', 'qf'])\n\n\ndef _forward_prop_core(x_0, u_seq, t_horizon, discrete_dynamics_f, dynamics_jac_f, B_x, gp_ensemble, covar, dt,\n                       m_int_steps, use_model):\n    \"\"\"\n    Propagates forward the state estimate described by the mean vector x_0 and the covariance matrix covar, and a\n    sequence of inputs for the system u_seq. These inputs can either be numerical or symbolic.\n\n    :param x_0: initial mean state of the state probability density function. Vector of length m\n    :param u_seq: sequence of flattened N control inputs. I.e. 
vector of size N*4\n    :param t_horizon: time horizon corresponding to sequence of inputs\n    :param discrete_dynamics_f: symbolic function to compute the discrete dynamics of the system.\n    :param dynamics_jac_f: symbolic function to compute the  dynamics jacobian of the system.\n    :param B_x: Matrix to convert map from regressor output to state.\n    :param gp_ensemble: The ensemble of GP's. Can be None if no GP's are used.\n    :param covar: Initial covariance matrix of shape m x m, of the state probability density function.\n    :param dt: Vector of N timestamps, the same length as w_opt / 2, corresponding to the time each input is applied.\n    :param m_int_steps: number of intermediate integration steps per control node.\n    :param use_model: The number (index) of the dynamics model to use from the available options.\n    :return: The sequence of mean and covariance estimates for every corresponding input, as well as the computed\n    cost for each stage.\n    \"\"\"\n\n    # Reshape input sequence to a N x 4 (1D) vector. Assume control input dim = 4\n    k = np.arange(int(u_seq.shape[0] / 4))\n    input_sequence = cs.horzcat(u_seq[4 * k], u_seq[4 * k + 1], u_seq[4 * k + 2], u_seq[4 * k + 3])\n\n    N = int(u_seq.shape[0] / 4)\n\n    if dt is None:\n        dt = t_horizon / N * np.ones((N, 1))\n\n    if len(dt.shape) == 1:\n        dt = np.expand_dims(dt, 1)\n\n    # Initialize sequence of propagated states\n    mu_x = [x_0]\n    cov_x = [covar]\n    cost_x = [0]\n\n    for k in range(N):\n\n        # Get current control input and current state mean and covariance\n        u_k = input_sequence[k, :]\n        mu_k = mu_x[k]\n        sig_k = cov_x[k]\n\n        # mu(k+1) vector from propagation equations. Pass state through nominal dynamics with GP mean augmentation if GP\n        # is available. 
Otherwise use nominal dynamics only.\n        f_func = discrete_dynamics_f(dt[k], 1, m_int_steps, k, use_gp=gp_ensemble is not None, use_model=use_model)\n\n        fk = f_func(x0=mu_k, p=u_k)\n        mu_next = fk['xf']\n        stage_cost = fk['qf']\n\n        # K(k+1) matrix from propagation equations\n        K_mat = sig_k\n\n        # Evaluate linearized dynamics at current state\n        l_mat = dynamics_jac_f(mu_k, u_k) * dt[k] + np.eye(mu_k.shape[0])\n\n        # Adjust matrices if GP model available.\n        if gp_ensemble is not None:\n\n            raise NotImplementedError  # TODO: re-implement covariance propagation with GP\n            # Get subset of features for GP regressor if GP regressor available\n            z_k = cs.mtimes(B_z, cs.vertcat(mu_k, u_k.T))\n\n            # GP prediction. Stack predictions vertically along prediction dimension (e.g. vx, vy, ...)\n            _, gp_covar_preds, gp_noise_prior = gp_ensemble.predict(z_k, return_cov=True, gp_idx=use_model)\n\n            # Covariance forward propagation.\n            l_mat = cs.horzcat(l_mat, B_x * dt[k])  # left-multiplying matrix\n            sig_ld_comp = cs.mtimes(gp_prediction_jac(z_k, B_x, B_z, gp_ensemble, use_model), sig_k)\n            K_mat = cs.vertcat(K_mat, sig_ld_comp)\n            K_mat = cs.horzcat(K_mat, cs.vertcat(sig_ld_comp.T, cs.diag(gp_covar_preds + gp_noise_prior)))\n\n        # Add next state estimate to lists\n        mu_x += [mu_next]\n        cov_x += [cs.mtimes(cs.mtimes(l_mat, K_mat), l_mat.T)]\n        cost_x += [stage_cost]\n\n    cov_x = [cov for cov in cov_x]\n    return mu_x, cov_x, cost_x\n\n\ndef uncertainty_forward_propagation(x_0, u_seq, t_horizon, discrete_dynamics_f, dynamics_jac_f, B_x=None,\n                                    gp_regressors=None, covar=None, dt=None, m_integration_steps=1, use_model=0):\n    if covar is None:\n        covar = np.zeros((len(x_0), len(x_0)))\n    else:\n        assert covar.shape == (len(x_0), len(x_0))\n\n    
x_0 = np.array(x_0)\n\n    mu_x, cov_x, _ = _forward_prop_core(x_0, u_seq, t_horizon, discrete_dynamics_f, dynamics_jac_f, B_x,\n                                        gp_regressors, covar, dt, m_integration_steps, use_model)\n\n    mu_x = cs.horzcat(*mu_x)\n    cov_x = cs.horzcat(*cov_x)\n\n    mu_prop = np.array(mu_x).T\n    cov_prop = np.array(cov_x).reshape((mu_prop.shape[1], mu_prop.shape[1], -1), order='F').transpose(2, 0, 1)\n    return mu_prop, cov_prop\n\n\ndef gp_prediction_jac(z, Bx, Bz, gp_ensemble, gp_idx):\n    \"\"\"\n    Computes the symbolic function for the Jacobian of the expected values of the joint GP regression model,\n    evaluated at point z.\n\n    :param z: A dictionary of symbolic vector at which the Jacobian must be evaluated. Each entry in the dictionary is\n    indexed by the output dimension index. The dimension m_z on any given value must be the expected dimension of the\n    feature inputs for the regressor.\n    :param Bx: Matrix to convert map from regressor output to state.\n    :param Bz: Matrix to map from (stacked) state vector and input vector to feature vector. 
If the gp_ensemble is not\n    homogeneous, this parameter must be a dictionary specifying in each dimension which Bz matrix to use.\n    :param gp_ensemble: GPEnsemble object with all the gp regressors\n    :param gp_idx: which gp to compute the jacobian to from the ensemble\n    :return: A Jacobian matrix of size n x m_x, where n is the number of variables regressed by the joint GP\n    regressor and m_x is the dimension of the state vector.\n    \"\"\"\n\n    # Figure out which variables are necessary to compute the Jacobian wrt.\n    out_dims = np.matmul(Bx, np.ones((Bx.shape[1], 1)))\n    if isinstance(Bz, dict):\n        z_dims = {}\n        for dim in Bz.keys():\n            z_dims[dim] = np.matmul(Bz[dim][:, :len(out_dims)], out_dims)\n    else:\n        bz = np.matmul(Bz[:, :len(out_dims)], out_dims)\n        z_dims = {k: v for k, v in zip(z.keys(), [bz] * len(z.keys()))}\n        Bz = {k: v for k, v in zip(z.keys(), [Bz] * len(z.keys()))}\n\n    jac = []\n    for dim in gp_idx.keys():\n        # Mapping from z to x\n        inv_Bz = Bz[dim][:, :len(out_dims)].T\n\n        gp = gp_ensemble.gp[dim][gp_idx[dim][0]]\n        jac += [cs.mtimes(inv_Bz, gp.eval_gp_jac(z[dim]) * z_dims[dim])]\n\n    return cs.horzcat(*jac).T\n\n\ndef simulate_plant(quad, w_opt, simulation_dt, simulate_func, t_horizon=None, dt_vec=None, progress_bar=False):\n    \"\"\"\n    Given a sequence of n inputs, evaluates the simulated discrete-time plant model n steps into the future. The\n    current drone state will not be changed by calling this method.\n    :param quad: Quadrotor3D simulator object\n    :type quad: Quadrotor3D\n    :param w_opt: sequence of control n x m control inputs, where n is the number of steps and m is the\n    dimensionality of a control input.\n    :param simulation_dt: simulation step\n    :param simulate_func: simulation function (inner loop) from the quadrotor MPC.\n    :param t_horizon: time corresponding to the duration of the n control inputs. 
In the case that the w_opt comes\n    from an MPC optimization, this parameter should be the MPC time horizon.\n    :param dt_vec: a vector of timestamps, the same length as w_opt, corresponding to the total time each input is\n    applied.\n    :param progress_bar: boolean - whether to show a progress bar on the console or not.\n    :return: the sequence of simulated quadrotor states.\n    \"\"\"\n\n    # Default_parameters\n    if t_horizon is None and dt_vec is None:\n        raise ValueError(\"At least the t_horizon or dt should be provided\")\n\n    if t_horizon is None:\n        t_horizon = np.sum(dt_vec)\n\n    state_safe = quad.get_state(quaternion=True, stacked=True)\n\n    # Compute true simulated trajectory\n    total_sim_time = t_horizon\n    sim_traj = []\n\n    if dt_vec is None:\n        change_control_input = np.arange(0, t_horizon, t_horizon / (w_opt.shape[0]))[1:]\n        first_dt = t_horizon / w_opt.shape[0]\n        sim_traj.append(quad.get_state(quaternion=True, stacked=True))\n    else:\n        change_control_input = np.cumsum(dt_vec)\n        first_dt = dt_vec[0]\n\n    t_start_ep = 1e-6\n    int_range = tqdm(np.arange(t_start_ep, total_sim_time, simulation_dt)) if progress_bar else \\\n        np.arange(t_start_ep, total_sim_time, simulation_dt)\n\n    current_ind = 0\n    past_ind = 0\n    for t_elapsed in int_range:\n        ref_u = w_opt[current_ind, :].T\n        simulate_func(ref_u)\n        if t_elapsed + simulation_dt >= first_dt:\n            current_ind = np.argwhere(change_control_input <= t_elapsed + simulation_dt)[-1, 0] + 1\n            if past_ind != current_ind:\n                sim_traj.append(quad.get_state(quaternion=True, stacked=True))\n                past_ind = current_ind\n\n    if dt_vec is None:\n        sim_traj.append(quad.get_state(quaternion=True, stacked=True))\n    sim_traj = np.squeeze(sim_traj)\n\n    quad.set_state(state_safe)\n\n    return sim_traj\n\n\ndef get_reference_chunk(reference_traj, reference_u, 
current_idx, n_mpc_nodes, reference_over_sampling):\n    \"\"\"\n    Extracts the reference states and controls for the current MPC optimization given the over-sampled counterparts.\n\n    :param reference_traj: The reference trajectory, which has been finely over-sampled by a factor of\n    reference_over_sampling. It should be a vector of shape (Nx13), where N is the length of the trajectory in samples.\n    :param reference_u: The reference controls, following the same requirements as reference_traj. Should be a vector\n    of shape (Nx4).\n    :param current_idx: Current index of the trajectory tracking. Should be an integer number between 0 and N-1.\n    :param n_mpc_nodes: Number of MPC nodes considered in the optimization.\n    :param reference_over_sampling: The over-sampling factor of the reference trajectories. Should be a positive\n    integer.\n    :return: Returns the chunks of reference selected for the current MPC iteration. Two numpy arrays will be returned:\n        - An ((N+1)x13) array, corresponding to the reference trajectory. The first row is the state of current_idx.\n        - An (Nx4) array, corresponding to the reference controls.\n    \"\"\"\n\n    # Dense references\n    ref_traj_chunk = reference_traj[current_idx:current_idx + (n_mpc_nodes + 1) * reference_over_sampling, :]\n    ref_u_chunk = reference_u[current_idx:current_idx + n_mpc_nodes * reference_over_sampling, :]\n\n    # Indices for down-sampling the reference to number of MPC nodes\n    downsample_ref_ind = np.arange(0, min(reference_over_sampling * (n_mpc_nodes + 1), ref_traj_chunk.shape[0]),\n                                   reference_over_sampling, dtype=int)\n\n    # Sparser references (same dt as node separation)\n    ref_traj_chunk = ref_traj_chunk[downsample_ref_ind, :]\n    ref_u_chunk = ref_u_chunk[downsample_ref_ind[:max(len(downsample_ref_ind) - 1, 1)], :]\n\n    return ref_traj_chunk, ref_u_chunk\n"
  },
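`discretize_dynamics_and_cost` builds the classic fixed-step RK4 update symbolically with CasADi. A numeric sketch of the same scheme (not the repository's symbolic code), verified against a problem with a known closed-form solution:

```python
import numpy as np

def rk4_step(f, x, u, dt):
    # One fixed-step RK4 update: the same scheme that
    # discretize_dynamics_and_cost constructs symbolically.
    k1 = f(x, u)
    k2 = f(x + dt / 2 * k1, u)
    k3 = f(x + dt / 2 * k2, u)
    k4 = f(x + dt * k3, u)
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Known solution: x_dot = -x with x(0) = 1 gives x(1) = exp(-1).
f = lambda x, u: -x
x = np.array([1.0])
dt = 0.01
for _ in range(100):
    x = rk4_step(f, x, None, dt)
print(x[0])  # ~0.3678794, i.e. exp(-1) to ~1e-10
```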
  {
    "path": "ros_dd_mpc/src/utils/trajectories.py",
    "content": "\"\"\" Trajectory generation functions. For the circle, lemniscate and random trajectories.\n\nThis program is free software: you can redistribute it and/or modify it under\nthe terms of the GNU General Public License as published by the Free Software\nFoundation, either version 3 of the License, or (at your option) any later\nversion.\nThis program is distributed in the hope that it will be useful, but WITHOUT\nANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\nFOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.\nYou should have received a copy of the GNU General Public License along with\nthis program. If not, see <http://www.gnu.org/licenses/>.\n\"\"\"\n\n\nimport numpy as np\nfrom src.utils.utils import undo_quaternion_flip, rotation_matrix_to_quat\nfrom src.utils.utils import quaternion_inverse, q_dot_q\nfrom src.utils.trajectory_generator import draw_poly, get_full_traj, fit_multi_segment_polynomial_trajectory\nfrom src.utils.keyframe_3d_gen import random_periodical_trajectory\nfrom config.configuration_parameters import DirectoryConfig\nfrom src.quad_mpc.quad_3d import Quadrotor3D\nimport matplotlib.pyplot as plt\nimport os\nimport yaml\nimport json\n\n\ndef check_trajectory(trajectory, inputs, tvec, plot=False):\n    \"\"\"\n\n    @param trajectory:\n    @param inputs:\n    @param tvec:\n    @param plot:\n    @return:\n    \"\"\"\n\n    print(\"Checking trajectory integrity...\")\n\n    dt = np.expand_dims(np.gradient(tvec, axis=0), axis=1)\n    numeric_derivative = np.gradient(trajectory, axis=0) / dt\n\n    errors = np.zeros((dt.shape[0], 3))\n\n    num_bodyrates = []\n\n    for i in range(dt.shape[0]):\n        # 1) check if velocity is consistent with position\n        numeric_velocity = numeric_derivative[i, 0:3]\n        analytic_velocity = trajectory[i, 7:10]\n        errors[i, 0] = np.linalg.norm(numeric_velocity - analytic_velocity)\n        if not np.allclose(analytic_velocity, 
numeric_velocity, atol=1e-2, rtol=1e-2):\n            print(\"inconsistent linear velocity\")\n            print(numeric_velocity)\n            print(analytic_velocity)\n            return False\n\n        # 2) check if attitude is consistent with acceleration\n        gravity = 9.81\n        numeric_thrust = numeric_derivative[i, 7:10] + np.array([0.0, 0.0, gravity])\n        numeric_thrust = numeric_thrust / np.linalg.norm(numeric_thrust)\n        analytic_attitude = trajectory[i, 3:7]\n        if np.abs(np.linalg.norm(analytic_attitude) - 1.0) > 1e-6:\n            print(\"quaternion does not have unit norm!\")\n            print(analytic_attitude)\n            print(np.linalg.norm(analytic_attitude))\n            return False\n\n        e_z = np.array([0.0, 0.0, 1.0])\n        q_w = 1.0 + np.dot(e_z, numeric_thrust)\n        q_xyz = np.cross(e_z, numeric_thrust)\n        numeric_attitude = 0.5 * np.array([q_w] + q_xyz.tolist())\n        numeric_attitude = numeric_attitude / np.linalg.norm(numeric_attitude)\n        # the two attitudes can only differ in yaw --> check x,y component\n        q_diff = q_dot_q(quaternion_inverse(analytic_attitude), numeric_attitude)\n        errors[i, 1] = np.linalg.norm(q_diff[1:3])\n        if not np.allclose(q_diff[1:3], np.zeros(2, ), atol=1e-3, rtol=1e-3):\n            print(\"Attitude and acceleration do not match!\")\n            print(analytic_attitude)\n            print(numeric_attitude)\n            print(q_diff)\n            return False\n\n        # 3) check if bodyrates agree with attitude difference\n        numeric_bodyrates = 2.0 * q_dot_q(quaternion_inverse(trajectory[i, 3:7]), numeric_derivative[i, 3:7])[1:]\n        num_bodyrates.append(numeric_bodyrates)\n        analytic_bodyrates = trajectory[i, 10:13]\n        errors[i, 2] = np.linalg.norm(numeric_bodyrates - analytic_bodyrates)\n        if not np.allclose(numeric_bodyrates, analytic_bodyrates, atol=0.05, rtol=0.05):\n            print(\"inconsistent angular 
velocity\")\n            print(numeric_bodyrates)\n            print(analytic_bodyrates)\n            return False\n\n    print(\"Trajectory check successful\")\n    print(\"Maximum linear velocity error: %.5f\" % np.max(errors[:, 0]))\n    print(\"Maximum attitude error: %.5f\" % np.max(errors[:, 1]))\n    print(\"Maximum angular velocity error: %.5f\" % np.max(errors[:, 2]))\n\n    if plot:\n        num_bodyrates = np.stack(num_bodyrates)\n        plt.figure()\n        for i in range(3):\n            plt.subplot(3, 2, i * 2 + 1)\n            plt.plot(numeric_derivative[:, i], label='numeric')\n            plt.plot(trajectory[:, 7 + i], label='analytic')\n            plt.ylabel('m/s')\n            if i == 0:\n                plt.title(\"Velocity check\")\n            plt.legend()\n\n        for i in range(3):\n            plt.subplot(3, 2, i * 2 + 2)\n            plt.plot(num_bodyrates[:, i], label='numeric')\n            plt.plot(trajectory[:, 10 + i], label='analytic')\n            plt.ylabel('rad/s')\n            if i == 0:\n                plt.title(\"Body rate check\")\n            plt.legend()\n        plt.suptitle('Integrity check of reference trajectory')\n        plt.show()\n\n    return True\n\n\ndef minimum_snap_trajectory_generator(traj_derivatives, yaw_derivatives, t_ref, quad, map_limits, plot, adjust_pos=True):\n    \"\"\"\n    Follows the Minimum Snap Trajectory paper to generate a full trajectory given the position reference and its\n    derivatives, and the yaw trajectory and its derivatives.\n\n    :param traj_derivatives: np.array of shape 4x3xN. N corresponds to the length in samples of the trajectory, and:\n        - The 4 components of the first dimension correspond to position, velocity, acceleration and jerk.\n        - The 3 components of the second dimension correspond to x, y, z.\n    :param yaw_derivatives: np.array of shape 2xN. N corresponds to the length in samples of the trajectory. 
The first\n    row is the yaw trajectory, and the second row is the yaw time-derivative trajectory.\n    :param t_ref: vector of length N, containing the reference times (starting from 0) for the trajectory.\n    :param quad: Quadrotor3D object, corresponding to the quadrotor model that will track the generated reference.\n    :type quad: Quadrotor3D\n    :param map_limits: dictionary of map limits if available, None otherwise.\n    :param plot: True if show a plot of the generated trajectory.\n    :return: tuple of 3 arrays:\n        - Nx13 array of generated reference trajectory. The 13 dimension contains the components: position_xyz,\n        attitude_quaternion_wxyz, velocity_xyz, body_rate_xyz.\n        - N array of reference timestamps. The same as in the input\n        - Nx4 array of reference controls, corresponding to the four motors of the quadrotor.\n    \"\"\"\n\n    discretization_dt = t_ref[1] - t_ref[0]\n    len_traj = traj_derivatives.shape[2]\n\n    # Add gravity to accelerations\n    gravity = 9.81\n    thrust = traj_derivatives[2, :, :].T + np.tile(np.array([[0, 0, 1]]), (len_traj, 1)) * gravity\n    # Compute body axes\n    z_b = thrust / np.sqrt(np.sum(thrust ** 2, 1))[:, np.newaxis]\n\n    yawing = np.any(yaw_derivatives[0, :] != 0)\n\n    rate = np.zeros((len_traj, 3))\n    f_t = np.zeros((len_traj, 1))\n    for i in range(len_traj):\n        f_t[i, 0] = quad.mass * z_b[i].dot(thrust[i, :].T)\n\n    if yawing:\n        # yaw is defined as the projection of the body-x axis on the horizontal plane\n        x_c = np.concatenate((np.cos(yaw_derivatives[0, :])[:, np.newaxis],\n                              np.sin(yaw_derivatives[0, :])[:, np.newaxis],\n                              np.zeros(len_traj)[:, np.newaxis]), 1)\n        y_b = np.cross(z_b, x_c)\n        y_b = y_b / np.sqrt(np.sum(y_b ** 2, axis=1))[:, np.newaxis]\n        x_b = np.cross(y_b, z_b)\n\n        # Rotation matrix (from body to world)\n        b_r_w = np.concatenate((x_b[:, :, 
np.newaxis], y_b[:, :, np.newaxis], z_b[:, :, np.newaxis]), -1)\n        q = []\n        for i in range(len_traj):\n            # Transform to quaternion\n            q.append(rotation_matrix_to_quat(b_r_w[i]))\n            if i > 0:\n                q[-1] = undo_quaternion_flip(q[-2], q[-1])\n        q = np.stack(q)\n\n        # Compute angular rate vector\n        # Total thrust acceleration must be equal to the projection of the quadrotor acceleration into the Z body axis\n        a_proj = np.zeros((len_traj, 1))\n\n        for i in range(len_traj):\n            a_proj[i, 0] = z_b[i].dot(traj_derivatives[3, :, i])\n\n        h_omega = quad.mass / f_t * (traj_derivatives[3, :, :].T - a_proj * z_b)\n        for i in range(len_traj):\n            rate[i, 0] = -h_omega[i].dot(y_b[i])\n            rate[i, 1] = h_omega[i].dot(x_b[i])\n            rate[i, 2] = -yaw_derivatives[1, i] * np.array([0, 0, 1]).dot(z_b[i])\n\n    else:\n        # new way to compute attitude:\n        # https://math.stackexchange.com/questions/2251214/calculate-quaternions-from-two-directional-vectors\n        e_z = np.array([[0.0, 0.0, 1.0]])\n        q_w = 1.0 + np.sum(e_z * z_b, axis=1)\n        q_xyz = np.cross(e_z, z_b)\n        q = 0.5 * np.concatenate([np.expand_dims(q_w, axis=1), q_xyz], axis=1)\n        q = q / np.sqrt(np.sum(q ** 2, 1))[:, np.newaxis]\n\n        # Use numerical differentiation of quaternions\n        q_dot = np.gradient(q, axis=0) / discretization_dt\n        w_int = np.zeros((len_traj, 3))\n        for i in range(len_traj):\n            w_int[i, :] = 2.0 * q_dot_q(quaternion_inverse(q[i, :]), q_dot[i])[1:]\n        rate[:, 0] = w_int[:, 0]\n        rate[:, 1] = w_int[:, 1]\n        rate[:, 2] = w_int[:, 2]\n\n        go_crazy_about_yaw = True\n        if go_crazy_about_yaw:\n            print(\"Maximum yaw rate before adaptation: %.3f\" % np.max(np.abs(rate[:, 2])))\n            q_new = q\n            yaw_corr_acc = 0.0\n            for i in range(1, len_traj):\n       
         yaw_corr = -rate[i, 2] * discretization_dt\n                yaw_corr_acc += yaw_corr\n                q_corr = np.array([np.cos(yaw_corr_acc / 2.0), 0.0, 0.0, np.sin(yaw_corr_acc / 2.0)])\n                q_new[i, :] = q_dot_q(q[i, :], q_corr)\n                w_int[i, :] = 2.0 * q_dot_q(quaternion_inverse(q[i, :]), q_dot[i])[1:]\n\n            q_new_dot = np.gradient(q_new, axis=0) / discretization_dt\n            for i in range(1, len_traj):\n                w_int[i, :] = 2.0 * q_dot_q(quaternion_inverse(q_new[i, :]), q_new_dot[i])[1:]\n\n            q = q_new\n            rate[:, 0] = w_int[:, 0]\n            rate[:, 1] = w_int[:, 1]\n            rate[:, 2] = w_int[:, 2]\n            print(\"Maximum yaw rate after adaptation: %.3f\" % np.max(np.abs(rate[:, 2])))\n\n    # Compute inputs\n    rate_dot = np.gradient(rate, axis=0) / discretization_dt\n\n    rate_x_Jrate = np.array([(quad.J[2] - quad.J[1]) * rate[:, 2] * rate[:, 1],\n                             (quad.J[0] - quad.J[2]) * rate[:, 0] * rate[:, 2],\n                             (quad.J[1] - quad.J[0]) * rate[:, 1] * rate[:, 0]]).T\n\n    tau = rate_dot * quad.J[np.newaxis, :] + rate_x_Jrate\n    b = np.concatenate((tau, f_t), axis=-1)\n    a_mat = np.concatenate((quad.y_f[np.newaxis, :], -quad.x_f[np.newaxis, :],\n                            quad.z_l_tau[np.newaxis, :], np.ones_like(quad.z_l_tau)[np.newaxis, :]), 0)\n\n    reference_u = np.zeros((len_traj, 4))\n    for i in range(len_traj):\n        reference_u[i, :] = np.linalg.solve(a_mat, b[i, :])\n\n    full_pos = traj_derivatives[0, :, :].T\n    full_vel = traj_derivatives[1, :, :].T\n    reference_traj = np.concatenate((full_pos, q, full_vel, rate), 1)\n\n    if adjust_pos:\n        if map_limits is None:\n            # Locate starting point right at x=0 and y=0.\n            reference_traj[:, 0] -= reference_traj[0, 0]\n            reference_traj[:, 1] -= reference_traj[0, 1]\n\n        else:\n            x_max_range = map_limits[\"x\"][1] 
- map_limits[\"x\"][0]\n            y_max_range = map_limits[\"y\"][1] - map_limits[\"y\"][0]\n            z_max_range = map_limits[\"z\"][1] - map_limits[\"z\"][0]\n\n            x_center = x_max_range / 2 + map_limits[\"x\"][0]\n            y_center = y_max_range / 2 + map_limits[\"y\"][0]\n            z_center = z_max_range / 2 + map_limits[\"z\"][0]\n\n            # Center circle to center of map XY plane\n            reference_traj[:, :3] += np.array([x_center, y_center, 0])\n            reference_traj[:, 2] = z_center\n\n    if plot:\n        draw_poly(reference_traj, reference_u, t_ref)\n\n    # Change format of reference input to motor activation, in interval [0, 1]\n    reference_u = reference_u / quad.max_thrust\n\n    return reference_traj, t_ref, reference_u\n\n\ndef load_map_limits_from_file(map_limits):\n    import rospy\n    if map_limits is not None and map_limits != \"None\":\n        config_path = DirectoryConfig.CONFIG_DIR\n        params_file = os.path.join(config_path, map_limits + '.yaml')\n        try:\n            with open(params_file) as file:\n                limits = yaml.full_load(file)\n                map_limits = {\"x\": [limits[\"x_min\"], limits[\"x_max\"]],\n                              \"y\": [limits[\"y_min\"], limits[\"y_max\"]],\n                              \"z\": [limits[\"z_min\"], limits[\"z_max\"]]}\n                rospy.loginfo(\"Using world limits: \" + json.dumps(limits))\n        except FileNotFoundError:\n            warn_msg = \"Tried to load environment limits: '%s', but the file was not found. 
Using default limits.\" \\\n                       % map_limits\n            rospy.logwarn(warn_msg)\n            map_limits = None\n    else:\n        map_limits = None\n\n    return map_limits\n\n\ndef straight_trajectory(quad, discretization_dt, speed):\n    n = 3\n    pos_traj = np.zeros((n, 3))\n    pos_traj[:, 1] = np.linspace(-3, 3, n)\n    pos_traj[:, 2] = 1\n    att_traj = np.zeros_like(pos_traj)\n\n    av_dist = np.mean(np.sqrt(np.sum(np.diff(pos_traj, axis=0) ** 2, axis=1)))\n    av_dt = av_dist / speed\n\n    poly_pos_traj = fit_multi_segment_polynomial_trajectory(pos_traj.T, att_traj[:, -1].T)\n    traj, yaw, t_ref = get_full_traj(poly_pos_traj, target_dt=av_dt, int_dt=discretization_dt)\n\n    reference_traj, t_ref, reference_u = minimum_snap_trajectory_generator(traj, yaw, t_ref, quad, None, False)\n    return reference_traj, t_ref, reference_u\n\n\ndef flyover_trajectory_collect(quad, discretization_dt, seed, speed, flyover_box_name):\n    np.random.seed(seed)\n    flyover_box = load_map_limits_from_file(flyover_box_name)\n    box_x_len = flyover_box['x'][1] - flyover_box['x'][0]\n    box_y_len = flyover_box['y'][1] - flyover_box['y'][0]\n    box_z_len = flyover_box['z'][1] - flyover_box['z'][0]\n    box_center = np.array([flyover_box['x'][0] + box_x_len / 2, flyover_box['y'][0] + box_y_len / 2])\n    box_diag = (box_x_len ** 2 + box_y_len ** 2) ** 0.5\n\n    pos_traj = np.zeros((50, 3))\n\n    for i in range(0, 50, 5):\n        rand_direction = (np.random.rand(2) - 0.5) * 2\n        rand_direction = rand_direction / np.linalg.norm(rand_direction)\n        height_start = flyover_box['z'][0] + np.random.rand() * box_z_len\n        start_point = box_center + rand_direction * 0.6 * box_diag\n\n        direction = (box_center - start_point)\n        direction = direction / np.linalg.norm(direction)\n\n        height_flyover = flyover_box['z'][0] + np.random.rand() * box_z_len\n        flyover_point = box_center - direction * 0.25 * box_diag\n\n        
height_flyover_center = 0.55\n        flyover_center_point = box_center\n\n        height_flyover2 = flyover_box['z'][0] + np.random.rand() * box_z_len\n        flyover_point2 = box_center + direction * 0.25 * box_diag\n\n        height_target = flyover_box['z'][0] + np.random.rand() * box_z_len\n        target_point = box_center + direction * 0.7 * box_diag\n\n\n        pos_traj[i, :2] = start_point\n        pos_traj[i, 2] = height_start\n        pos_traj[i + 1, :2] = flyover_point\n        pos_traj[i + 1, 2] = height_flyover\n        pos_traj[i + 2, :2] = flyover_center_point\n        pos_traj[i + 2, 2] = height_flyover_center\n        pos_traj[i + 3, :2] = flyover_point2\n        pos_traj[i + 3, 2] = height_flyover2\n        pos_traj[i + 4, :2] = target_point\n        pos_traj[i + 4, 2] = height_target\n\n\n\n    att_traj = np.zeros_like(pos_traj)\n\n    av_dist = np.mean(np.sqrt(np.sum(np.diff(pos_traj, axis=0) ** 2, axis=1)))\n    av_dt = av_dist / speed\n\n    poly_pos_traj = fit_multi_segment_polynomial_trajectory(pos_traj.T, att_traj[:, -1].T)\n    traj, yaw, t_ref = get_full_traj(poly_pos_traj, target_dt=av_dt, int_dt=discretization_dt)\n    reference_traj, t_ref, reference_u = minimum_snap_trajectory_generator(traj, yaw, t_ref, quad, None, False, adjust_pos=False)\n    return reference_traj, t_ref, reference_u\n\n\ndef flyover_trajectory(quad, discretization_dt, seed, speed, flyover_box_name):\n    np.random.seed(seed)\n    flyover_box = load_map_limits_from_file(flyover_box_name)\n    box_x_len = flyover_box['x'][1] - flyover_box['x'][0]\n    box_y_len = flyover_box['y'][1] - flyover_box['y'][0]\n    box_z_len = flyover_box['z'][1] - flyover_box['z'][0]\n    box_center = np.array([flyover_box['x'][0] + box_x_len / 2, flyover_box['y'][0] + box_y_len / 2])\n    box_diag = (box_x_len ** 2 + box_y_len ** 2) ** 0.5\n\n    pos_traj = np.zeros((17, 3))\n\n    rand_direction = (np.random.rand(2) - 0.5) * 2\n    rand_direction = rand_direction / 
np.linalg.norm(rand_direction)\n    height = flyover_box['z'][0] + np.random.rand() * box_z_len\n    start_point = box_center + rand_direction * 0.6 * box_diag\n\n    pos_traj[0, :2] = start_point\n    pos_traj[0, 2] = height\n\n    for i in range(1, 16, 3):\n        rand_direction = (np.random.rand(2) - 0.5) * 2\n        rand_direction = rand_direction / np.linalg.norm(rand_direction)\n        height = flyover_box['z'][0] + np.random.rand() * box_z_len\n        start_point = box_center + rand_direction * 0.6 * box_diag\n\n        direction = (box_center - start_point)\n        direction = direction / np.linalg.norm(direction)\n\n        flyover_center_point = box_center\n\n        target_point = box_center + direction * 0.6 * box_diag\n\n        pos_traj[i, :2] = start_point\n        pos_traj[i, 2] = height\n        pos_traj[i + 1, :2] = flyover_center_point\n        pos_traj[i + 1, 2] = height\n        pos_traj[i + 2, :2] = target_point\n        pos_traj[i + 2, 2] = height\n\n    target_point = box_center + direction * 1.2 * box_diag\n    pos_traj[i + 3, :2] = target_point\n    pos_traj[i + 3, 2] = height\n\n    att_traj = np.zeros_like(pos_traj)\n\n    av_dist = np.mean(np.sqrt(np.sum(np.diff(pos_traj, axis=0) ** 2, axis=1)))\n    av_dt = av_dist / speed\n\n    poly_pos_traj = fit_multi_segment_polynomial_trajectory(pos_traj.T, att_traj[:, -1].T)\n    traj, yaw, t_ref = get_full_traj(poly_pos_traj, target_dt=av_dt, int_dt=discretization_dt)\n    reference_traj, t_ref, reference_u = minimum_snap_trajectory_generator(traj, yaw, t_ref, quad, None, False,\n                                                                           adjust_pos=False)\n    return reference_traj, t_ref, reference_u\n\n\ndef random_trajectory(quad, discretization_dt, seed, speed, map_name=None, plot=False):\n    map_limits = load_map_limits_from_file(map_name)\n\n    # Get a random smooth position trajectory\n    pos_traj, att_traj = random_periodical_trajectory(random_state=seed, 
map_limits=map_limits, plot=False)\n\n    if map_limits is None:\n        # Locate starting point right at x=0 and y=0.\n        pos_traj[:, 0] -= pos_traj[0, 0]\n        pos_traj[:, 1] -= pos_traj[0, 1]\n\n    # Ensure no negative z position\n    min_z_pos = np.min(pos_traj[:, -1])\n    if min_z_pos < 0:\n        pos_traj[:, -1] = pos_traj[:, -1] + 2 * min_z_pos * np.sign(min_z_pos)\n\n    # Calculate feasible time based on the average distance between two position track points:\n    av_dist = np.mean(np.sqrt(np.sum(np.diff(pos_traj, axis=0) ** 2, axis=1)))\n    av_dt = av_dist / speed\n\n    # Calculate the polynomial fit to the position trajectory, and compute the full kinematic reference. This\n    # trajectory is sampled according to the frequency that will be needed for the MPC.\n    poly_pos_traj = fit_multi_segment_polynomial_trajectory(pos_traj.T, att_traj[:, -1].T)\n\n    traj, yaw, t_ref = get_full_traj(poly_pos_traj, av_dt, discretization_dt)\n\n    reference_traj, t_ref, reference_u = minimum_snap_trajectory_generator(traj, yaw, t_ref, quad, map_limits, False)\n    if plot:\n        draw_poly(reference_traj, reference_u, t_ref, pos_traj.T)\n\n    return reference_traj, t_ref, reference_u\n\n\ndef loop_trajectory(quad, discretization_dt, radius, z, lin_acc, clockwise, yawing, v_max, map_name, plot):\n    \"\"\"\n    Creates a circular trajectory on the x-y plane that increases speed by 1m/s at every revolution.\n\n    :param quad: Quadrotor model\n    :param discretization_dt: Sampling period of the trajectory.\n    :param radius: radius of loop trajectory in meters\n    :param z: z position of loop plane in meters\n    :param lin_acc: linear acceleration of trajectory (and successive deceleration) in m/s^2\n    :param clockwise: True if the rotation will be done clockwise.\n    :param yawing: True if the quadrotor yaws along the trajectory. False for 0 yaw trajectory.\n    :param v_max: Maximum speed at peak velocity. 
Revolutions needed will be calculated automatically.\n    :param map_name: Name of map to load its limits\n    :param plot: Whether to plot an analysis of the planned trajectory or not.\n    :return: The full 13-DoF trajectory with time and input vectors\n    \"\"\"\n\n    # Apply map limits to radius\n    map_limits = load_map_limits_from_file(map_name)\n    if map_limits is not None:\n        x_max_range = map_limits[\"x\"][1] - map_limits[\"x\"][0]\n        y_max_range = map_limits[\"y\"][1] - map_limits[\"y\"][0]\n\n        max_radius = min(x_max_range / 2, y_max_range / 2)\n\n        radius = min(radius, max_radius)\n\n    assert z > 0\n\n    ramp_up_t = 2  # s\n\n    # Calculate simulation time to achieve desired maximum velocity with specified acceleration\n    t_total = 2 * v_max / lin_acc + 2 * ramp_up_t\n\n    # Transform to angular acceleration\n    alpha_acc = lin_acc / radius  # rad/s^2\n\n    # Generate time and angular acceleration sequences\n    # Ramp up sequence\n    ramp_t_vec = np.arange(0, ramp_up_t, discretization_dt)\n    ramp_up_alpha = alpha_acc * np.sin(np.pi / (2 * ramp_up_t) * ramp_t_vec) ** 2\n    # Acceleration phase\n    coasting_duration = (t_total - 4 * ramp_up_t) / 2\n    coasting_t_vec = ramp_up_t + np.arange(0, coasting_duration, discretization_dt)\n    coasting_alpha = np.ones_like(coasting_t_vec) * alpha_acc\n    # Transition phase: decelerate\n    transition_t_vec = np.arange(0, 2 * ramp_up_t, discretization_dt)\n    transition_alpha = alpha_acc * np.cos(np.pi / (2 * ramp_up_t) * transition_t_vec)\n    transition_t_vec += coasting_t_vec[-1] + discretization_dt\n    # Deceleration phase\n    down_coasting_t_vec = transition_t_vec[-1] + np.arange(0, coasting_duration, discretization_dt) + discretization_dt\n    down_coasting_alpha = -np.ones_like(down_coasting_t_vec) * alpha_acc\n    # Bring to rest phase\n    ramp_up_t_vec = down_coasting_t_vec[-1] + np.arange(0, ramp_up_t, discretization_dt) + discretization_dt\n    
ramp_up_alpha_end = ramp_up_alpha - alpha_acc\n\n    # Concatenate all sequences\n    t_ref = np.concatenate((ramp_t_vec, coasting_t_vec, transition_t_vec, down_coasting_t_vec, ramp_up_t_vec))\n    alpha_vec = np.concatenate((\n        ramp_up_alpha, coasting_alpha, transition_alpha, down_coasting_alpha, ramp_up_alpha_end))\n\n    # Calculate derivative of angular acceleration (alpha_vec)\n    ramp_up_alpha_dt = alpha_acc * np.pi / (2 * ramp_up_t) * np.sin(np.pi / ramp_up_t * ramp_t_vec)\n    coasting_alpha_dt = np.zeros_like(coasting_alpha)\n    transition_alpha_dt = - alpha_acc * np.pi / (2 * ramp_up_t) * np.sin(np.pi / (2 * ramp_up_t) * transition_t_vec)\n    alpha_dt = np.concatenate((\n        ramp_up_alpha_dt, coasting_alpha_dt, transition_alpha_dt, coasting_alpha_dt, ramp_up_alpha_dt))\n\n    if not clockwise:\n        alpha_vec *= -1\n        alpha_dt *= -1\n\n    # Compute angular integrals\n    w_vec = np.cumsum(alpha_vec) * discretization_dt\n    angle_vec = np.cumsum(w_vec) * discretization_dt\n\n    # Compute position, velocity, acceleration, jerk\n    pos_traj_x = radius * np.sin(angle_vec)[np.newaxis, np.newaxis, :]\n    pos_traj_y = radius * np.cos(angle_vec)[np.newaxis, np.newaxis, :]\n    pos_traj_z = np.ones_like(pos_traj_x) * z\n\n    vel_traj_x = (radius * w_vec * np.cos(angle_vec))[np.newaxis, np.newaxis, :]\n    vel_traj_y = - (radius * w_vec * np.sin(angle_vec))[np.newaxis, np.newaxis, :]\n\n    acc_traj_x = radius * (alpha_vec * np.cos(angle_vec) - w_vec ** 2 * np.sin(angle_vec))[np.newaxis, np.newaxis, :]\n    acc_traj_y = - radius * (alpha_vec * np.sin(angle_vec) + w_vec ** 2 * np.cos(angle_vec))[np.newaxis, np.newaxis, :]\n\n    jerk_traj_x = radius * (alpha_dt * np.cos(angle_vec) - alpha_vec * np.sin(angle_vec) * w_vec -\n                            np.cos(angle_vec) * w_vec ** 3 - 2 * np.sin(angle_vec) * w_vec * alpha_vec)\n    jerk_traj_y = - radius * (np.cos(angle_vec) * w_vec * alpha_vec + np.sin(angle_vec) * alpha_dt -\n            
                  np.sin(angle_vec) * w_vec ** 3 + 2 * np.cos(angle_vec) * w_vec * alpha_vec)\n    jerk_traj_x = jerk_traj_x[np.newaxis, np.newaxis, :]\n    jerk_traj_y = jerk_traj_y[np.newaxis, np.newaxis, :]\n\n    if yawing:\n        yaw_traj = -angle_vec\n    else:\n        yaw_traj = np.zeros_like(angle_vec)\n\n    traj = np.concatenate((\n        np.concatenate((pos_traj_x, pos_traj_y, pos_traj_z), 1),\n        np.concatenate((vel_traj_x, vel_traj_y, np.zeros_like(vel_traj_x)), 1),\n        np.concatenate((acc_traj_x, acc_traj_y, np.zeros_like(acc_traj_x)), 1),\n        np.concatenate((jerk_traj_x, jerk_traj_y, np.zeros_like(jerk_traj_x)), 1)), 0)\n\n    yaw = np.concatenate((yaw_traj[np.newaxis, :], w_vec[np.newaxis, :]), 0)\n\n    return minimum_snap_trajectory_generator(traj, yaw, t_ref, quad, map_limits, plot)\n\n\ndef lemniscate_trajectory(quad, discretization_dt, radius, z, lin_acc, clockwise, yawing, v_max, map_name, plot):\n    \"\"\"\n    Creates a lemniscate (figure-8) trajectory on the x-y plane that accelerates up to the maximum speed and\n    decelerates back to rest.\n\n    :param quad: Quadrotor model\n    :param discretization_dt: Sampling period of the trajectory.\n    :param radius: radius of the lemniscate lobes in meters\n    :param z: z position of the trajectory plane in meters\n    :param lin_acc: linear acceleration of trajectory (and successive deceleration) in m/s^2\n    :param clockwise: True if the rotation will be done clockwise (currently unused).\n    :param yawing: True if the quadrotor yaws along the trajectory (currently unused).\n    :param v_max: Maximum speed at peak velocity.\n    :param map_name: Name of map to load its limits\n    :param plot: Whether to plot an analysis of the planned trajectory or not.\n    :return: The full 13-DoF trajectory with time and input vectors\n    \"\"\"\n\n    # Apply map limits to radius\n    map_limits = load_map_limits_from_file(map_name)\n    if map_limits is not None:\n        x_max_range = map_limits[\"x\"][1] - map_limits[\"x\"][0]\n        y_max_range = map_limits[\"y\"][1] - map_limits[\"y\"][0]\n\n        max_radius = min(x_max_range / 2, y_max_range / 2)\n\n        radius = min(radius, max_radius)\n\n    assert z > 0\n\n    ramp_up_t = 2  # s\n\n    # Calculate simulation time to achieve desired maximum velocity with specified acceleration\n    t_total = 2 * v_max / lin_acc + 2 * ramp_up_t\n\n    # Transform to angular acceleration\n    alpha_acc = lin_acc / radius  # rad/s^2\n\n    # Generate time and angular acceleration sequences\n    # Ramp up sequence\n    ramp_t_vec = np.arange(0, ramp_up_t, discretization_dt)\n    
ramp_up_alpha = alpha_acc * np.sin(np.pi / (2 * ramp_up_t) * ramp_t_vec) ** 2\n    # Acceleration phase\n    coasting_duration = (t_total - 4 * ramp_up_t) / 2\n    coasting_t_vec = ramp_up_t + np.arange(0, coasting_duration, discretization_dt)\n    coasting_alpha = np.ones_like(coasting_t_vec) * alpha_acc\n    # Transition phase: decelerate\n    transition_t_vec = np.arange(0, 2 * ramp_up_t, discretization_dt)\n    transition_alpha = alpha_acc * np.cos(np.pi / (2 * ramp_up_t) * transition_t_vec)\n    transition_t_vec += coasting_t_vec[-1] + discretization_dt\n    # Deceleration phase\n    down_coasting_t_vec = transition_t_vec[-1] + np.arange(0, coasting_duration, discretization_dt) + discretization_dt\n    down_coasting_alpha = -np.ones_like(down_coasting_t_vec) * alpha_acc\n    # Bring to rest phase\n    ramp_up_t_vec = down_coasting_t_vec[-1] + np.arange(0, ramp_up_t, discretization_dt) + discretization_dt\n    ramp_up_alpha_end = ramp_up_alpha - alpha_acc\n\n    # Concatenate all sequences\n    t_ref = np.concatenate((ramp_t_vec, coasting_t_vec, transition_t_vec, down_coasting_t_vec, ramp_up_t_vec))\n    alpha_vec = np.concatenate((\n        ramp_up_alpha, coasting_alpha, transition_alpha, down_coasting_alpha, ramp_up_alpha_end))\n\n    # Compute angular integrals\n    w_vec = np.cumsum(alpha_vec) * discretization_dt\n    angle_vec = np.cumsum(w_vec) * discretization_dt\n\n    # Adaption: we achieve the highest spikes in the bodyrates when passing through the 'center' part of the figure-8\n    # This leads to negative reference thrusts.\n    # Let's see if we can alleviate this by adapting the z-reference in these parts to add some acceleration in the\n    # z-component\n    z_dim = 0.0\n\n    # Compute position, velocity, acceleration, jerk\n    pos_traj_x = radius * np.cos(angle_vec)[np.newaxis, np.newaxis, :]\n    pos_traj_y = radius * (np.sin(angle_vec) * np.cos(angle_vec))[np.newaxis, np.newaxis, :]\n    pos_traj_z = - z_dim * np.cos(4.0 * 
angle_vec)[np.newaxis, np.newaxis, :] + z\n\n    vel_traj_x = -radius * (w_vec * np.sin(angle_vec))[np.newaxis, np.newaxis, :]\n    vel_traj_y = radius * (w_vec * np.cos(angle_vec) ** 2 - w_vec * np.sin(angle_vec) ** 2)[np.newaxis, np.newaxis, :]\n    vel_traj_z = 4.0 * z_dim * w_vec * np.sin(4.0 * angle_vec)[np.newaxis, np.newaxis, :]\n\n    acc_traj_x = -radius * (alpha_vec * np.sin(angle_vec) + w_vec ** 2 * np.cos(angle_vec))\n    acc_traj_y = radius * (alpha_vec * np.cos(angle_vec) ** 2 - 2.0 * w_vec ** 2 * np.cos(angle_vec) * np.sin(\n        angle_vec) - alpha_vec * np.sin(angle_vec) ** 2 - 2.0 * w_vec ** 2 * np.sin(angle_vec) * np.cos(angle_vec))\n    acc_traj_z = 16.0 * z_dim * (w_vec ** 2 * np.cos(4.0 * angle_vec) + alpha_vec * np.sin(4.0 * angle_vec))\n    acc_traj_x = acc_traj_x[np.newaxis, np.newaxis, :]\n    acc_traj_y = acc_traj_y[np.newaxis, np.newaxis, :]\n    acc_traj_z = acc_traj_z[np.newaxis, np.newaxis, :]\n\n    traj = np.concatenate((\n        np.concatenate((pos_traj_x, pos_traj_y, pos_traj_z), 1),\n        np.concatenate((vel_traj_x, vel_traj_y, vel_traj_z), 1),\n        np.concatenate((acc_traj_x, acc_traj_y, acc_traj_z), 1)), 0)\n\n    # Zero yaw and yaw-rate reference (2xN, matching the format expected by the generator)\n    yaw = np.zeros((2, traj.shape[-1]))\n\n    return minimum_snap_trajectory_generator(traj, yaw, t_ref, quad, map_limits, plot)\n"
  },
  {
    "path": "ros_dd_mpc/src/utils/trajectory_generator.py",
    "content": "import numpy as np\nimport matplotlib.pyplot as plt\n\n\ndef draw_poly(traj, u_traj, t, target_points=None, target_t=None):\n    \"\"\"\n    Plots the generated trajectory of length n with the used keypoints.\n    :param traj: Full generated reference trajectory. Numpy array of shape nx13\n    :param u_traj: Generated reference inputs. Numpy array of shape nx4\n    :param t: Timestamps of the references. Numpy array of length n\n    :param target_points: m position keypoints used for trajectory generation. Numpy array of shape 3 x m.\n    :param target_t: Timestamps of the reference position keypoints. If not passed, then they are extracted from the\n    t vector, assuming constant time separation.\n    \"\"\"\n\n    ders = 2\n    dims = 3\n\n    y_labels = [r'pos $[m]$', r'vel $[m/s]$', r'acc $[m/s^2]$', r'jer $[m/s^3]$']\n    dim_legends = ['x', 'y', 'z']\n\n    if target_t is None and target_points is not None:\n        target_t = np.linspace(0, t[-1], target_points.shape[1])\n\n    p_traj = traj[:, :3]\n    a_traj = traj[:, 3:7]\n    v_traj = traj[:, 7:10]\n    r_traj = traj[:, 10:]\n\n    plt_traj = [p_traj, v_traj]\n\n    fig = plt.figure()\n    for d_ord in range(ders):\n\n        plt.subplot(ders + 2, 2, d_ord * 2 + 1)\n\n        for dim in range(dims):\n\n            plt.plot(t, plt_traj[d_ord][:, dim], label=dim_legends[dim])\n\n            if d_ord == 0 and target_points is not None:\n                plt.plot(target_t, target_points[dim, :], 'bo')\n\n        plt.gca().set_xticklabels([])\n        plt.legend()\n        plt.grid()\n        plt.ylabel(y_labels[d_ord])\n\n    dim_legends = [['w', 'x', 'y', 'z'], ['x', 'y', 'z']]\n    y_labels = [r'att $[quat]$', r'rate $[rad/s]$']\n    plt_traj = [a_traj, r_traj]\n    for d_ord in range(ders):\n\n        plt.subplot(ders + 2, 2, d_ord * 2 + 1 + ders * 2)\n        for dim in range(plt_traj[d_ord].shape[1]):\n            plt.plot(t, plt_traj[d_ord][:, dim], label=dim_legends[d_ord][dim])\n\n    
    plt.legend()\n        plt.grid()\n        plt.ylabel(y_labels[d_ord])\n        if d_ord == ders - 1:\n            plt.xlabel(r'time $[s]$')\n        else:\n            plt.gca().set_xticklabels([])\n\n    ax = fig.add_subplot(2, 2, 2, projection='3d')\n    plt.plot(p_traj[:, 0], p_traj[:, 1], p_traj[:, 2])\n    if target_points is not None:\n        plt.plot(target_points[0, :], target_points[1, :], target_points[2, :], 'bo')\n    plt.title('Target position trajectory')\n    ax.set_xlabel(r'$p_x [m]$')\n    ax.set_ylabel(r'$p_y [m]$')\n    ax.set_zlabel(r'$p_z [m]$')\n\n    plt.subplot(ders + 1, 2, (ders + 1) * 2)\n    for i in range(u_traj.shape[1]):\n        plt.plot(t, u_traj[:, i], label=r'$u_{}$'.format(i))\n    plt.grid()\n    plt.legend()\n    plt.gca().yaxis.set_label_position(\"right\")\n    plt.gca().yaxis.tick_right()\n    plt.xlabel(r'time $[s]$')\n    plt.ylabel(r'single thrusts $[N]$')\n    plt.title('Control inputs')\n\n    plt.suptitle('Generated polynomial trajectory')\n\n    plt.show()\n\n\ndef get_full_traj(poly_coeffs, target_dt, int_dt):\n\n    dims = poly_coeffs.shape[-1]\n    full_traj = np.zeros((4, dims, 0))\n    t_total = np.zeros((0,))\n\n    if isinstance(target_dt, float):\n        # Adjust target_dt to make it divisible by int_dt\n        target_dt = round(target_dt / int_dt) * int_dt\n\n        # Assign target time for each keypoint using homogeneous spacing\n        t_vec = np.arange(0, target_dt * (poly_coeffs.shape[0] + 1) - 1e-5, target_dt)\n\n    else:\n        # The time between each pair of points is assigned independently\n        # First, also adjust each value of the target_dt vector to make it divisible by int_dt\n        for i, dt in enumerate(target_dt):\n            target_dt[i] = round(dt / int_dt) * int_dt\n\n        t_vec = np.append(np.zeros(1), np.cumsum(target_dt[:-1]))\n\n    for seg in range(len(t_vec) - 1):\n\n        # Select time sampling (linear or quadratic) mode\n        tau_dt = np.arange(t_vec[seg], 
t_vec[seg + 1] + 1e-5, int_dt)\n\n        # Re-normalize time sampling vector between -1 and 1\n        t1 = (tau_dt - t_vec[seg]) / (t_vec[seg + 1] - t_vec[seg]) * 2 - 1\n\n        # Compression ratio\n        compress = 2 / np.diff(t_vec)[seg]\n\n        # Integrate current segment of trajectory\n        traj = np.zeros((4, dims, len(t1)))\n\n        for der_order in range(4):\n            for i in range(dims):\n                traj[der_order, i, :] = np.polyval(np.polyder(poly_coeffs[seg, :, i], der_order), t1) * (compress ** der_order)\n\n        if seg < len(t_vec) - 2:\n            # Remove last sample (will be the initial point of next segment)\n            traj = traj[:, :, :-1]\n            t_seg = tau_dt[:-1]\n        else:\n            t_seg = tau_dt\n\n        full_traj = np.concatenate((full_traj, traj), axis=-1)\n        t_total = np.concatenate((t_total, t_seg))\n\n    # Separate into p_xyz and yaw trajectories\n    yaw_traj = full_traj[:, -1, :]\n    full_traj = full_traj[:, :-1, :]\n\n    return full_traj, yaw_traj, t_total\n\n\ndef fit_multi_segment_polynomial_trajectory(p_targets, yaw_targets):\n\n    p_targets = np.concatenate((p_targets, yaw_targets[np.newaxis, :]), 0)\n    m = multiple_waypoints(p_targets.shape[1] - 1)\n\n    dims = p_targets.shape[0]\n    n_segments = p_targets.shape[1]\n\n    poly_coefficients = np.zeros((n_segments - 1, 8, dims))\n    for dim in range(dims):\n        b = rhs_generation(p_targets[dim, :])\n        poly_coefficients[:, :, dim] = np.fliplr(np.linalg.solve(m, b).reshape(n_segments - 1, 8))\n\n    return poly_coefficients\n\n\ndef matrix_generation(ts):\n    b = np.array([[1, ts,  ts**2, ts**3,    ts**4,    ts**5,     ts**6,     ts**7],\n                  [0, 1, 2*ts,  3*ts**2,  4*ts**3,  5*ts**4,   6*ts**5,   7*ts**6],\n                  [0, 0, 2,     6*ts,    12*ts**2, 20*ts**3,  30*ts**4,  42*ts**5],\n                  [0, 0, 0,     6,       24*ts,    60*ts**2, 120*ts**3, 210*ts**4],\n                  [0, 0, 
0,     0,       24,      120*ts,    360*ts**2, 840*ts**3],\n                  [0, 0, 0,     0,       0,       120,       720*ts,   2520*ts**2],\n                  [0, 0, 0,     0,       0,       0,         720,      5040*ts],\n                  [0, 0, 0,     0,       0,       0,         0,        5040]])\n\n    return b\n\n\ndef multiple_waypoints(n_segments):\n\n    m = np.zeros((8 * n_segments, 8 * n_segments))\n\n    for i in range(n_segments):\n\n        if i == 0:\n\n            # initial condition of the first curve\n            b = matrix_generation(-1.0)\n            m[8 * i:8 * i + 4, 8 * i:8 * i + 8] = b[:4, :]\n\n            # intermediary condition of the first curve\n            b = matrix_generation(1.0)\n            m[8 * i + 4:8 * i + 7 + 4, 8 * i:8 * i + 8] = b[:-1, :]\n\n            # starting condition of the second curve position and derivatives\n            b = matrix_generation(-1.0)\n            m[8 * i + 4 + 1:8 * i + 4 + 7, 8 * (i + 1):8 * (i + 1) + 8] = -b[1:-1, :]\n            m[8 * i + 4 + 7:8 * i + 4 + 8, 8 * (i + 1):8 * (i + 1) + 8] = b[0, :]\n\n        elif i != n_segments - 1:\n\n            # starting condition of the ith curve position and derivatives\n            b = matrix_generation(1.0)\n            m[8 * i + 4:8 * i + 7 + 4, 8 * i:8 * i + 8] = b[:-1, :]\n\n            # end condition of the ith curve position and derivatives\n            b = matrix_generation(-1.0)\n            m[8 * i + 4 + 1:8 * i + 4 + 7, 8 * (i + 1):8 * (i + 1) + 8] = -b[1:-1, :]\n            m[8 * i + 4 + 7:8 * i + 4 + 8, 8 * (i + 1):8 * (i + 1) + 8] = b[0, :]\n\n        if i == n_segments - 1:\n            # end condition of the final curve position and derivatives (4 boundary conditions)\n            b = matrix_generation(1.0)\n            m[8 * i + 4:8 * i + 4 + 4, 8 * i:8 * i + 8] = b[:4, :]\n\n    return m\n\n\ndef fit_single_segment(p_start, p_end, v_start=None, v_end=None, a_start=None, a_end=None, j_start=None, j_end=None):\n\n    if v_start is 
None:\n        v_start = np.array([0, 0])\n    if v_end is None:\n        v_end = np.array([0, 0])\n    if a_start is None:\n        a_start = np.array([0, 0])\n    if a_end is None:\n        a_end = np.array([0, 0])\n    if j_start is None:\n        j_start = np.array([0, 0])\n    if j_end is None:\n        j_end = np.array([0, 0])\n\n    poly_coefficients = np.zeros((8, len(p_start)))\n\n    tf = 1\n    ti = -1\n    A = np.array(([\n        [1 * tf ** 7,   1 * tf ** 6,   1 * tf ** 5,   1 * tf ** 4,   1 * tf ** 3,  1 * tf ** 2,  1 * tf ** 1,  1],\n        [7 * tf ** 6,   6 * tf ** 5,   5 * tf ** 4,   4 * tf ** 3,   3 * tf ** 2,  2 * tf ** 1,  1,            0],\n        [42 * tf ** 5,  30 * tf ** 4,  20 * tf ** 3,  12 * tf ** 2,  6 * tf ** 1,  2,            0,            0],\n        [210 * tf ** 4, 120 * tf ** 3, 60 * tf ** 2,  24 * tf ** 1,  6,            0,            0,            0],\n        [1 * ti ** 7,   1 * ti ** 6,   1 * ti ** 5,   1 * ti ** 4,   1 * ti ** 3,  1 * ti ** 2,  1 * ti ** 1,  1],\n        [7 * ti ** 6,   6 * ti ** 5,   5 * ti ** 4,   4 * ti ** 3,   3 * ti ** 2,  2 * ti ** 1,  1,            0],\n        [42 * ti ** 5,  30 * ti ** 4,  20 * ti ** 3,  12 * ti ** 2,  6 * ti ** 1,  2,            0,            0],\n        [210 * ti ** 4, 120 * ti ** 3, 60 * ti ** 2,  24 * ti ** 1,  6,            0,            0,            0]]))\n\n    A = np.tile(A[:, :, np.newaxis], (1, 1, len(p_start)))\n\n    b = np.concatenate((p_end, v_end, a_end, j_end, p_start, v_start, a_start, j_start)).reshape(8, -1)\n\n    for i in range(len(p_start)):\n        poly_coefficients[:, i] = np.linalg.inv(A[:, :, i]).dot(np.array(b[:, i]))\n\n    return np.expand_dims(poly_coefficients, 0)\n\n\ndef rhs_generation(x):\n    n = x.shape[0] - 1\n\n    big_x = np.zeros((8 * n))\n    big_x[:4] = np.array([x[0], 0, 0, 0]).T\n    big_x[-4:] = np.array([x[-1], 0, 0, 0]).T\n\n    for i in range(1, n):\n        big_x[8 * (i - 1) + 4:8 * (i - 1) + 8 + 4] = np.array([x[i], 0, 0, 0, 0, 0, 
0, x[i]]).T\n\n    return big_x\n"
  },
  {
    "path": "ros_dd_mpc/src/utils/utils.py",
    "content": "\"\"\" Miscellaneous utility functions.\n\nThis program is free software: you can redistribute it and/or modify it under\nthe terms of the GNU General Public License as published by the Free Software\nFoundation, either version 3 of the License, or (at your option) any later\nversion.\nThis program is distributed in the hope that it will be useful, but WITHOUT\nANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\nFOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.\nYou should have received a copy of the GNU General Public License along with\nthis program. If not, see <http://www.gnu.org/licenses/>.\n\"\"\"\n\n\nimport os\nimport math\nimport json\nimport errno\nimport shutil\nimport joblib\nimport random\nimport pyquaternion\nimport numpy as np\nimport casadi as cs\nfrom sklearn import preprocessing\nfrom sklearn.decomposition import PCA\nfrom sklearn.cluster import KMeans\nfrom scipy.interpolate.interpolate import interp1d\nfrom config.configuration_parameters import DirectoryConfig as GPConfig\nimport matplotlib.pyplot as plt\nimport xml.etree.ElementTree as XMLtree\n\n\ndef safe_mkdir_recursive(directory, overwrite=False):\n    if not os.path.exists(directory):\n        try:\n            os.makedirs(directory)\n        except OSError as exc:\n            if exc.errno == errno.EEXIST and os.path.isdir(directory):\n                pass\n            else:\n                raise\n    else:\n        if overwrite:\n            try:\n                shutil.rmtree(directory)\n            except:\n                print('Error while removing directory: {0}'.format(directory))\n\n\ndef safe_mknode_recursive(destiny_dir, node_name, overwrite):\n    safe_mkdir_recursive(destiny_dir)\n    if overwrite and os.path.exists(os.path.join(destiny_dir, node_name)):\n        os.remove(os.path.join(destiny_dir, node_name))\n    if not os.path.exists(os.path.join(destiny_dir, node_name)):\n        
open(os.path.join(destiny_dir, node_name), 'w').close()\n        #os.mknod(os.path.join(destiny_dir, node_name))\n        return False\n    return True\n\n\ndef jsonify(array):\n    if isinstance(array, np.ndarray):\n        return array.tolist()\n    if isinstance(array, list):\n        return array\n    return array\n\n\ndef undo_jsonify(array):\n    x = []\n    for elem in array:\n        a = elem.split('[')[1].split(']')[0].split(',')\n        a = [float(num) for num in a]\n        x = x + [a]\n    return np.array(x)\n\n\ndef get_data_dir_and_file(ds_name, training_split, params, read_only=False):\n    \"\"\"\n    Returns the directory and file name where to store the next simulation-based dataset.\n    \"\"\"\n\n    # Data folder directory\n    rec_file_dir = GPConfig.DATA_DIR\n\n    # Look for file\n    f = []\n    d = []\n    for (dir_path, dir_names, file_names) in os.walk(rec_file_dir):\n        f.extend(file_names)\n        d.extend(dir_names)\n        break\n\n    split = \"train\" if training_split else \"test\"\n\n    # Check if current dataset folder already exists. 
Create it otherwise\n    if ds_name in d and os.path.exists(os.path.join(rec_file_dir, ds_name, split)):\n        dataset_instances = []\n        for (_, _, file_names) in os.walk(os.path.join(rec_file_dir, ds_name, split)):\n            dataset_instances.extend([os.path.splitext(file)[0] for file in file_names if not file.startswith('.')])\n    else:\n        if read_only:\n            return None\n        safe_mkdir_recursive(os.path.join(rec_file_dir, ds_name, split))\n        dataset_instances = []\n\n    json_file_name = os.path.join(rec_file_dir, 'metadata.json')\n    if not f:\n        if read_only:\n            return None\n        # Metadata file not found -> make new one\n        with open(json_file_name, 'w') as json_file:\n            ds_instance_name = \"dataset_001\"\n            json.dump({ds_name: {split: {ds_instance_name: params}}}, json_file, indent=4)\n    else:\n        # Metadata file existing\n        with open(json_file_name) as json_file:\n            metadata = json.load(json_file)\n\n        # Check if current dataset name with data split exists in metadata:\n        if ds_name in metadata.keys() and split in metadata[ds_name].keys():\n\n            existing_instance_idx = -1\n            for i, instance in enumerate(dataset_instances):\n                if metadata[ds_name][split][instance] == params:\n                    existing_instance_idx = i\n                    if not read_only:\n                        print(\"This configuration already exists in the dataset with the same name.\")\n\n            if existing_instance_idx == -1:\n\n                if read_only:\n                    return None\n\n                if dataset_instances:\n                    # Dataset exists but this is a new instance\n                    existing_instances = [int(instance.split(\"_\")[1]) for instance in dataset_instances]\n                    max_instance_number = max(existing_instances)\n                    ds_instance_name = \"dataset_\" + 
str(max_instance_number + 1).zfill(3)\n\n                    # Add to metadata dictionary the new instance\n                    metadata[ds_name][split][ds_instance_name] = params\n\n                else:\n                    # Edge case where, for some error, there was something added to the metadata file but no actual\n                    # datasets were recorded. Remove entries from metadata and add them again.\n                    ds_instance_name = \"dataset_001\"\n                    metadata[ds_name][split] = {}\n                    metadata[ds_name][split][ds_instance_name] = params\n\n            else:\n                # Dataset exists and there is an instance with the same configuration\n                ds_instance_name = dataset_instances[existing_instance_idx]\n\n        else:\n            if read_only:\n                return None\n\n            ds_instance_name = \"dataset_001\"\n            safe_mkdir_recursive(os.path.join(rec_file_dir, ds_name, split))\n\n            # Add to metadata dictionary the new instance\n            if ds_name in metadata.keys():\n                metadata[ds_name][split] = {ds_instance_name: params}\n            else:\n                metadata[ds_name] = {split: {ds_instance_name: params}}\n\n        if not read_only:\n            with open(json_file_name, 'w') as json_file:\n                json.dump(metadata, json_file, indent=4)\n\n    return os.path.join(rec_file_dir, ds_name, split), ds_instance_name + '.csv'\n\n\ndef get_model_dir_and_file(model_options):\n    directory = os.path.join(GPConfig.SAVE_DIR, str(model_options[\"git\"]), str(model_options[\"model_name\"]))\n\n    model_params = model_options[\"params\"]\n    file_name = ''\n    model_vars = list(model_params.keys())\n    model_vars.sort()\n    for i, param in enumerate(model_vars):\n        if i > 0:\n            file_name += '__'\n        file_name += 'no_' if not model_params[param] else ''\n        file_name += param\n\n    return directory, 
file_name\n\n\ndef load_pickled_models(directory='', file_name='', model_options=None):\n    \"\"\"\n    Loads a pre-trained model from the specified directory, contained in a given pickle filename. Alternatively, if\n    the model_options dictionary is given, use its contents to reconstruct the directory location of the pre-trained\n    model fitting the requirements.\n\n    :param directory: directory where the model file is located\n    :param file_name: file name of the pre-trained model\n    :param model_options: dictionary with the keys \"git\" (string), \"model_name\" (string) and \"params\" (dict), as\n    expected by get_model_dir_and_file().\n\n    :return: a dictionary with the recovered models from the pickle files.\n    \"\"\"\n\n    if model_options is not None:\n        directory, file_name = get_model_dir_and_file(model_options)\n\n    try:\n        pickled_files = os.listdir(directory)\n    except FileNotFoundError:\n        return None\n\n    loaded_models = []\n\n    try:\n        for file in pickled_files:\n            if not file.startswith(file_name) and file != 'feats.csv':\n                continue\n            if '.pt' in file:\n                # Torch-based model: rebuild the MLP architecture (classes provided by the ml-casadi\n                # submodule) and restore its trained weights from the saved state dict.\n                saved_dict = torch.load(os.path.join(directory, file))\n                mlp_model = dc.nn.MultiLayerPerceptron(saved_dict['input_size'], saved_dict['hidden_size'],\n                                                       saved_dict['output_size'], saved_dict['hidden_layers'], 'Tanh')\n                model = NormalizedMLP(mlp_model, torch.tensor(np.zeros((saved_dict['input_size'],))).float(),\n                                      torch.tensor(np.zeros((saved_dict['input_size'],))).float(),\n                                      torch.tensor(np.zeros((saved_dict['output_size'],))).float(),\n                                      torch.tensor(np.zeros((saved_dict['output_size'],))).float())\n                model.load_state_dict(saved_dict['state_dict'])\n                model.eval()\n                loaded_models.append(model)\n                continue\n            if '.pkl' not in file and '.csv' not in file:\n                continue\n            if '.pkl' in file:\n                loaded_models.append(joblib.load(os.path.join(directory, file)))\n\n    except IsADirectoryError:\n        raise FileNotFoundError(\"Tried to load file from directory %s, but it was not found.\" % directory)\n\n    pre_trained_models = {\"models\": loaded_models} if loaded_models else None\n\n    return pre_trained_models\n\n\ndef interpol_mse(t_1, x_1, t_2, x_2, n_interp_samples=1000):\n    if np.all(t_1 == t_2):\n        return np.mean(np.sqrt(np.sum((x_1 - x_2) ** 2, axis=1)))\n\n    assert x_1.shape[1] == x_2.shape[1]\n\n    t_min = max(t_1[0], t_2[0])\n    t_max = min(t_1[-1], t_2[-1])\n\n    t_interp = np.linspace(t_min, t_max, n_interp_samples)\n    err = np.zeros((n_interp_samples, x_1.shape[1]))\n\n    for dim in range(x_1.shape[1]):\n        x1_interp = interp1d(t_1, x_1[:, dim], kind='cubic')\n        x2_interp = interp1d(t_2, x_2[:, dim], kind='cubic')\n\n        x1_sample = x1_interp(t_interp)\n        x2_sample = x2_interp(t_interp)\n\n        err[:, dim] = x1_sample - x2_sample\n\n    return np.mean(np.sqrt(np.sum(err ** 2, axis=1)))\n\n\ndef euclidean_dist(x, y, thresh=None):\n    \"\"\"\n    Measures the Euclidean distance between points x and y. If a threshold value is provided, this function returns True\n    if the calculated distance is smaller than the threshold, or False otherwise. 
If no threshold is provided, this\n    function returns the computed distance.\n\n    :param x: n-dimension point\n    :param y: n-dimension point\n    :param thresh: threshold\n    :type thresh: float\n    :return: If thresh is not None: whether x and y are closer to each other than the threshold.\n    If thresh is None: the distance between x and y\n    \"\"\"\n\n    dist = np.sqrt(np.sum((x - y) ** 2))\n\n    if thresh is None:\n        return dist\n\n    return dist < thresh\n\n\ndef euler_to_quaternion(roll, pitch, yaw):\n    qx = np.sin(roll / 2) * np.cos(pitch / 2) * np.cos(yaw / 2) - np.cos(roll / 2) * np.sin(pitch / 2) * np.sin(yaw / 2)\n    qy = np.cos(roll / 2) * np.sin(pitch / 2) * np.cos(yaw / 2) + np.sin(roll / 2) * np.cos(pitch / 2) * np.sin(yaw / 2)\n    qz = np.cos(roll / 2) * np.cos(pitch / 2) * np.sin(yaw / 2) - np.sin(roll / 2) * np.sin(pitch / 2) * np.cos(yaw / 2)\n    qw = np.cos(roll / 2) * np.cos(pitch / 2) * np.cos(yaw / 2) + np.sin(roll / 2) * np.sin(pitch / 2) * np.sin(yaw / 2)\n\n    return np.array([qw, qx, qy, qz])\n\n\ndef quaternion_to_euler(q):\n    q = pyquaternion.Quaternion(w=q[0], x=q[1], y=q[2], z=q[3])\n    yaw, pitch, roll = q.yaw_pitch_roll\n    return [roll, pitch, yaw]\n\n\ndef unit_quat(q):\n    \"\"\"\n    Normalizes a quaternion to be unit modulus.\n    :param q: 4-dimensional numpy array or CasADi object\n    :return: the unit quaternion in the same data format as the original one\n    \"\"\"\n\n    if isinstance(q, np.ndarray):\n        # if (q == np.zeros(4)).all():\n        #     q = np.array([1, 0, 0, 0])\n        q_norm = np.sqrt(np.sum(q ** 2))\n    else:\n        q_norm = cs.sqrt(cs.sumsqr(q))\n    return 1 / q_norm * q\n\n\ndef v_dot_q(v, q):\n    rot_mat = q_to_rot_mat(q)\n    if isinstance(q, np.ndarray):\n        return rot_mat.dot(v)\n\n    return cs.mtimes(rot_mat, v)\n\n\ndef q_to_rot_mat(q):\n    qw, qx, qy, qz = q[0], q[1], q[2], q[3]\n\n    if isinstance(q, np.ndarray):\n        rot_mat = np.array([\n  
          [1 - 2 * (qy ** 2 + qz ** 2), 2 * (qx * qy - qw * qz), 2 * (qx * qz + qw * qy)],\n            [2 * (qx * qy + qw * qz), 1 - 2 * (qx ** 2 + qz ** 2), 2 * (qy * qz - qw * qx)],\n            [2 * (qx * qz - qw * qy), 2 * (qy * qz + qw * qx), 1 - 2 * (qx ** 2 + qy ** 2)]])\n\n    else:\n        rot_mat = cs.vertcat(\n            cs.horzcat(1 - 2 * (qy ** 2 + qz ** 2), 2 * (qx * qy - qw * qz), 2 * (qx * qz + qw * qy)),\n            cs.horzcat(2 * (qx * qy + qw * qz), 1 - 2 * (qx ** 2 + qz ** 2), 2 * (qy * qz - qw * qx)),\n            cs.horzcat(2 * (qx * qz - qw * qy), 2 * (qy * qz + qw * qx), 1 - 2 * (qx ** 2 + qy ** 2)))\n\n    return rot_mat\n\n\ndef q_dot_q(q, r):\n    \"\"\"\n    Applies the rotation of quaternion r to quaternion q. In other words, rotates quaternion q by r. Quaternion format:\n    wxyz.\n\n    :param q: 4-length numpy array or CasADi MX. Initial rotation\n    :param r: 4-length numpy array or CasADi MX. Applied rotation\n    :return: The quaternion q rotated by r, with the same format as in the input.\n    \"\"\"\n\n    qw, qx, qy, qz = q[0], q[1], q[2], q[3]\n    rw, rx, ry, rz = r[0], r[1], r[2], r[3]\n\n    t0 = rw * qw - rx * qx - ry * qy - rz * qz\n    t1 = rw * qx + rx * qw - ry * qz + rz * qy\n    t2 = rw * qy + rx * qz + ry * qw - rz * qx\n    t3 = rw * qz - rx * qy + ry * qx + rz * qw\n\n    if isinstance(q, np.ndarray):\n        return np.array([t0, t1, t2, t3])\n    else:\n        return cs.vertcat(t0, t1, t2, t3)\n\n\ndef rotation_matrix_to_quat(rot):\n    \"\"\"\n    Calculate a quaternion from a 3x3 rotation matrix.\n\n    :param rot: 3x3 numpy array, representing a valid rotation matrix\n    :return: a quaternion corresponding to the 3D rotation described by the input matrix. 
Quaternion format: wxyz\n    \"\"\"\n\n    q = pyquaternion.Quaternion(matrix=rot)\n    return np.array([q.w, q.x, q.y, q.z])\n\n\ndef undo_quaternion_flip(q_past, q_current):\n    \"\"\"\n    Detects if q_current generated a quaternion jump and corrects it. Requires knowledge of the previous quaternion\n    in the series, q_past\n    :param q_past: 4-dimensional vector representing a quaternion in wxyz form.\n    :param q_current: 4-dimensional vector representing a quaternion in wxyz form. Will be corrected if it generates\n    a flip wrt q_past.\n    :return: q_current with the flip removed if necessary\n    \"\"\"\n\n    if np.sqrt(np.sum((q_past - q_current) ** 2)) > np.sqrt(np.sum((q_past + q_current) ** 2)):\n        return -q_current\n    return q_current\n\n\ndef skew_symmetric(v):\n    \"\"\"\n    Computes the skew-symmetric matrix of a 3D vector (PAMPC version)\n\n    :param v: 3D numpy vector or CasADi MX\n    :return: the corresponding skew-symmetric matrix of v with the same data type as v\n    \"\"\"\n\n    if isinstance(v, np.ndarray):\n        return np.array([[0, -v[0], -v[1], -v[2]],\n                         [v[0], 0, v[2], -v[1]],\n                         [v[1], -v[2], 0, v[0]],\n                         [v[2], v[1], -v[0], 0]])\n\n    return cs.vertcat(\n        cs.horzcat(0, -v[0], -v[1], -v[2]),\n        cs.horzcat(v[0], 0, v[2], -v[1]),\n        cs.horzcat(v[1], -v[2], 0, v[0]),\n        cs.horzcat(v[2], v[1], -v[0], 0))\n\n\ndef decompose_quaternion(q):\n    \"\"\"\n    Decomposes a quaternion into a z rotation and an xy rotation\n    :param q: 4-dimensional numpy array of CasADi MX (format qw, qx, qy, qz)\n    :return: two 4-dimensional arrays (same format as input), where the first contains the xy rotation and the second\n    the z rotation, in quaternion forms.\n    \"\"\"\n\n    w, x, y, z = q[0], q[1], q[2], q[3]\n\n    if isinstance(q, cs.MX):\n        qz = unit_quat(cs.vertcat(w, 0, 0, z))\n    else:\n        qz = 
unit_quat(np.array([w, 0, 0, z]))\n    qxy = q_dot_q(q, quaternion_inverse(qz))\n\n    return qxy, qz\n\n\ndef quaternion_inverse(q):\n    \"\"\"\n    Returns the conjugate of q, which equals its inverse for unit quaternions.\n\n    :param q: 4-dimensional numpy array or CasADi MX (format qw, qx, qy, qz)\n    :return: the conjugate quaternion, in the same data format as the input\n    \"\"\"\n\n    w, x, y, z = q[0], q[1], q[2], q[3]\n\n    if isinstance(q, np.ndarray):\n        return np.array([w, -x, -y, -z])\n    else:\n        return cs.vertcat(w, -x, -y, -z)\n\n\ndef rotation_matrix_to_euler(r_mat):\n    sy = math.sqrt(r_mat[0, 0] * r_mat[0, 0] + r_mat[1, 0] * r_mat[1, 0])\n\n    singular = sy < 1e-6\n\n    if not singular:\n        x = math.atan2(r_mat[2, 1], r_mat[2, 2])\n        y = math.atan2(-r_mat[2, 0], sy)\n        z = math.atan2(r_mat[1, 0], r_mat[0, 0])\n    else:\n        x = math.atan2(-r_mat[1, 2], r_mat[1, 1])\n        y = math.atan2(-r_mat[2, 0], sy)\n        z = 0\n\n    return np.array([x, y, z])\n\n\ndef prune_dataset(x, y, x_cap, bins, thresh, plot, labels=None):\n    \"\"\"\n    Prunes the collected model error dataset with two filters. First, remove samples where the input values (velocities)\n    exceed x_cap in magnitude. Second, build a histogram for each of the three axial velocity errors (y) with the\n    specified number of bins and remove any data from bins holding less than the specified threshold ratio of samples.\n    :param x: dataset of input GP features (velocities). Dimensions N x n (N entries and n dimensions)\n    :param y: dataset of errors. 
Dimensions N x m (N entries and m dimensions)\n    :param x_cap: remove values from dataset if x > x_cap or x < -x_cap\n    :param bins: number of bins used for histogram\n    :param thresh: threshold ratio below which data from that bin will be removed\n    :param plot: make a plot of the pruning\n    :param labels: Labels to use for the plot\n    :return: The indices kept after the pruning\n    \"\"\"\n\n    n_bins = bins\n    original_length = x.shape[0]\n\n    plot_bins = []\n    if plot:\n        plt.figure()\n        for i in range(y.shape[1]):\n            plt.subplot(y.shape[1] + 1, 1, i + 1)\n            h, bins = np.histogram(y[:, i], bins=n_bins)\n            plot_bins.append(bins)\n            plt.bar(bins[:-1], h, np.ones_like(h) * (bins[1] - bins[0]), align='edge', label='discarded')\n            if labels is not None:\n                plt.ylabel(labels[i])\n\n    pruned_idx_unique = np.zeros(0, dtype=int)\n\n    # Prune velocities (max axial velocity = x_cap m/s).\n    if x_cap is not None:\n        for i in range(x.shape[1]):\n            pruned_idx = np.where(np.abs(x[:, i]) > x_cap)[0]\n            pruned_idx_unique = np.unique(np.append(pruned_idx, pruned_idx_unique))\n\n    # Prune by error histogram dimension wise (discard bins with less than 1% of the data)\n    for i in range(y.shape[1]):\n        h, bins = np.histogram(y[:, i], bins=n_bins)\n        for j in range(len(h)):\n            if h[j] / np.sum(h) < thresh:\n                pruned_idx = np.where(np.logical_and(bins[j + 1] >= y[:, i], y[:, i] >= bins[j]))\n                pruned_idx_unique = np.unique(np.append(pruned_idx, pruned_idx_unique))\n\n    y_norm = np.sqrt(np.sum(y ** 2, 1))\n\n    # Prune by error histogram norm\n    h, error_bins = np.histogram(y_norm, bins=n_bins)\n    h = h / np.sum(h)\n    if plot:\n        plt.subplot(y.shape[1] + 1, 1, y.shape[1] + 1)\n        plt.bar(error_bins[:-1], h, np.ones_like(h) * (error_bins[1] - error_bins[0]), align='edge', 
label='discarded')\n\n    for j in range(len(h)):\n        if h[j] / np.sum(h) < thresh:\n            pruned_idx = np.where(np.logical_and(error_bins[j + 1] >= y_norm, y_norm >= error_bins[j]))\n            pruned_idx_unique = np.unique(np.append(pruned_idx, pruned_idx_unique))\n\n    y = np.delete(y, pruned_idx_unique, axis=0)\n\n    if plot:\n        for i in range(y.shape[1]):\n            plt.subplot(y.shape[1] + 1, 1, i + 1)\n            h, bins = np.histogram(y[:, i], bins=plot_bins[i])\n            plt.bar(bins[:-1], h, np.ones_like(h) * (bins[1] - bins[0]), align='edge', label='kept')\n            plt.legend()\n\n        plt.subplot(y.shape[1] + 1, 1, y.shape[1] + 1)\n        h, bins = np.histogram(np.sqrt(np.sum(y ** 2, 1)), bins=error_bins)\n        h = h / sum(h)\n        plt.bar(bins[:-1], h, np.ones_like(h) * (bins[1] - bins[0]), align='edge', label='kept')\n        plt.ylabel('Error norm')\n\n    kept_idx = np.delete(np.arange(0, original_length), pruned_idx_unique)\n    return kept_idx\n\n\ndef distance_maximizing_points_1d(points, n_train_points, dense_gp=None):\n    \"\"\"\n    Heuristic function for sampling training points in 1D (one input feature and one output prediction dimensions)\n    :param points: dataset points for the current cluster. Array of shape Nx1\n    :param n_train_points: Integer. 
number of training points to sample.\n    :param dense_gp: A GP object to sample the points from, or None if the points will be taken directly from the data.\n    :return: indices of the selected dataset points if dense_gp is None, otherwise a 1 x n_train_points array of\n    feature values at which to sample the GP.\n    \"\"\"\n\n    closest_points = np.zeros(n_train_points, dtype=int if dense_gp is None else float)\n\n    if dense_gp is not None:\n        n_train_points -= 1\n\n    # Fit histogram in data with as many bins as the number of training points\n    a, b = np.histogram(points, bins=n_train_points)\n    hist_indices = np.digitize(points, b) - 1\n\n    # Pick as training value the median or mean value of each bin\n    for i in range(n_train_points):\n        bin_values = points[np.where(hist_indices == i)]\n        if len(bin_values) < 1:\n            closest_points[i] = np.random.choice(len(points))\n            continue\n        if divmod(len(bin_values), 2)[1] == 0:\n            bin_values = bin_values[:-1]\n\n        if dense_gp is None:\n            # If no dense GP, sample the median point of each bin from the training set\n            bin_median = np.median(bin_values)\n            median_point_id = np.where(points == bin_median)[0]\n            closest_points[i] = median_point_id[0]\n        else:\n            # If a dense GP is available, sample the mean point of each bin from the GP\n            bin_mean = np.mean(bin_values)\n            closest_points[i] = bin_mean\n\n    if dense_gp is not None:\n        # Add dimension axis 0\n        closest_points[-1] = np.max(points)\n        closest_points = closest_points[np.newaxis, :]\n\n    return closest_points\n\n\ndef distance_maximizing_points_2d(points, n_train_points, dense_gp, plot=False):\n    if n_train_points > 30:\n        n_clusters = max(int(n_train_points / 10), 30)\n        n_samples = int(np.floor(n_train_points / n_clusters))\n    else:\n        n_clusters = n_train_points\n        n_samples = 1\n\n    kmeans = 
KMeans(n_clusters).fit_predict(points)\n\n    closest_points = []\n    for i in range(n_clusters):\n        closest_points += np.random.choice(np.where(kmeans == i)[0], n_samples).tolist()\n\n    # Remove excess points\n    for _ in range(len(closest_points) - n_train_points):\n        rnd_item = random.choice(closest_points)\n        closest_points.remove(rnd_item)\n    closest_points = np.array(closest_points)\n\n    if plot:\n        plt.figure()\n        for i in range(n_clusters):\n            cluster_points = points[np.where(kmeans == i)]\n            plt.scatter(cluster_points[:, 0], cluster_points[:, 1])\n        plt.scatter(points[closest_points, 0], points[closest_points, 1], marker='*', facecolors='w',\n                    edgecolors='k', s=100, label='selected')\n        plt.legend()\n        plt.show()\n\n    if dense_gp is None:\n        return closest_points\n\n    closest_points = points[closest_points].T\n    return closest_points\n\n\ndef distance_maximizing_points(x_values, center, n_train_points=7, dense_gp=None, plot=False):\n    if x_values.shape[1] == 1:\n        return distance_maximizing_points_1d(x_values, n_train_points, dense_gp)\n\n    if x_values.shape[1] == 2:\n        return distance_maximizing_points_2d(x_values, n_train_points, dense_gp, plot)\n\n    # For 3 or more feature dimensions, compute PCA of data to find variability maximizing axes\n    pca = PCA(n_components=3)\n    pca.fit(x_values)\n    pca_axes = pca.components_\n    data_center = center\n\n    # Apply PCA transformation\n    points_pca = (x_values - data_center).dot(pca_axes.T)\n    center = (center - data_center).dot(pca_axes.T)\n\n    # Compute the corners of the cube containing the data in the PCA space\n    p_min = center - (center - np.min(points_pca, 0))\n    p_max = (np.max(points_pca, 0) - center) + center\n\n    centroids = np.array([[center[0], center[1], center[2]]])\n\n    pyramids = np.array([[p_max[0], center[1], center[2]], [center[0], p_max[1], center[2]],\n                         
[center[0], center[1], p_max[2]], [p_min[0], center[1], center[2]],\n                         [center[0], p_min[1], center[2]], [center[0], center[1], p_min[2]]])\n\n    cuboid = np.array([[p_max[0], p_max[1], p_max[2]], [p_max[0], p_max[1], p_min[2]],\n                       [p_max[0], p_min[1], p_max[2]], [p_max[0], p_min[1], p_min[2]],\n                       [p_min[0], p_max[1], p_max[2]], [p_min[0], p_max[1], p_min[2]],\n                       [p_min[0], p_min[1], p_max[2]], [p_min[0], p_min[1], p_min[2]]])\n\n    if n_train_points >= 15:\n        centroids = np.concatenate((centroids, pyramids, cuboid), axis=0)\n    elif n_train_points >= 9:\n        centroids = np.concatenate((centroids, cuboid), axis=0)\n    elif n_train_points >= 7:\n        centroids = np.concatenate((centroids, pyramids), axis=0)\n    else:\n        centroids = centroids\n\n    if dense_gp is None:\n\n        closest_points = np.ones(centroids.shape[0], dtype=int) * -1\n\n        # Find the closest points to all centroids\n        for i in range(centroids.shape[0]):\n            centroid = centroids[i, :]\n            dist = np.sqrt(np.sum((points_pca - centroid) ** 2, 1))\n            closest_point = int(np.argmin(dist))\n\n            # Assert no repeated points. 
If repeated, find next closest one and check again\n            while closest_point in closest_points:\n                dist[closest_point] = np.inf\n                closest_point = int(np.argmin(dist))\n            closest_points[i] = closest_point\n\n        # Convert centroids to data space\n        centroids_ = centroids.dot(pca_axes) + data_center\n\n        if not plot:\n            return closest_points\n\n        fig = plt.figure()\n\n        ax = fig.add_subplot(122, projection='3d')\n        ax.scatter(points_pca[:, 0], points_pca[:, 1], points_pca[:, 2], 'b', label='data')\n\n        ax.scatter(centroids[0, 0], centroids[0, 1], centroids[0, 2], s=50, label='center')\n        ax.scatter(centroids[1:, 0], centroids[1:, 1], centroids[1:, 2], s=50, label='centroids')\n        ax.scatter(points_pca[closest_points, 0], points_pca[closest_points, 1], points_pca[closest_points, 2],\n                   s=50, label='selected')\n        ax.set_title('PCA space')\n        ax.legend()\n\n        ax = fig.add_subplot(121, projection='3d')\n        ax.scatter(x_values[:, 0], x_values[:, 1], x_values[:, 2], 'b', label='data')\n\n        closest_points_x = x_values[closest_points]\n        ax.scatter(centroids_[0, 0], centroids_[0, 1], centroids_[0, 2], s=50, label='center')\n        ax.scatter(centroids_[1:, 0], centroids_[1:, 1], centroids_[1:, 2], s=50, label='centroids')\n        ax.scatter(closest_points_x[:, 0], closest_points_x[:, 1], closest_points_x[:, 2], s=50, label='selected')\n        ax.set_title('Data space')\n        ax.legend()\n        plt.show()\n\n        # Returns the indices of the closest points from the dataset to the ideal ones\n        return closest_points\n\n    else:\n\n        # TODO: Use GP covariance for sampling\n        # Convert centroids to data space\n        centroids_ = centroids.dot(pca_axes) + data_center\n\n        return centroids_.T\n\n\ndef sample_random_points(points, used_idx, points_to_sample, dense_gp=None):\n    bins = 
min(10, int(len(points) / 10))\n    bins = max(bins, 2)\n\n    # Add remaining points as random points\n    free_points = np.arange(0, points.shape[0], 1)\n    gp_i_free_points = np.delete(free_points, used_idx)\n    n_samples = min(points_to_sample, len(gp_i_free_points))\n\n    # Compute histogram of data\n    a, b = np.histogramdd(points[gp_i_free_points, :], bins)\n    assignments = [np.minimum(np.digitize(points[gp_i_free_points, j], bins=b[j]) - 1, bins - 1)\n                   for j in range(points.shape[1])]\n\n    # Compute probability distribution based on inverse histogram\n    probs = np.max(a) - a[tuple(assignments)]\n    probs = probs / sum(probs)\n\n    try:\n        gp_i_free_points = np.random.choice(gp_i_free_points, n_samples, p=probs, replace=False)\n    except ValueError:\n        # Fall back to uniform sampling if the inverse-histogram weights are degenerate\n        gp_i_free_points = np.random.choice(gp_i_free_points, n_samples, replace=False)\n    used_idx = np.append(used_idx, gp_i_free_points)\n\n    return used_idx\n\n\ndef parse_xacro_file(xacro):\n    \"\"\"\n    Reads a .xacro file describing a robot for Gazebo and returns a dictionary with its properties.\n    :param xacro: full path of .xacro file to read\n    :return: a dictionary of robot attributes\n    \"\"\"\n\n    tree = XMLtree.parse(xacro)\n\n    attrib_dict = {}\n\n    # Element.getchildren() was removed in Python 3.9; iterate over the elements directly instead\n    for node in list(tree.getroot()):\n        # Get attributes\n        attributes = node.attrib\n\n        if 'value' in attributes.keys():\n            attrib_dict[attributes['name']] = attributes['value']\n\n        if list(node):\n            try:\n                attrib_dict[attributes['name']] = [child.attrib for child in node]\n            except KeyError:\n                # Node has no 'name' attribute\n                continue\n\n    return attrib_dict\n\n\ndef make_bx_matrix(x_dims, y_feats):\n    \"\"\"\n    Generates the Bx matrix for the GP augmented MPC.\n\n    :param x_dims: dimensionality of the state vector\n    :param y_feats: array with the indices of the state vector x that are augmented by the GP regressor\n    :return: The Bx matrix to map the GP output to the state 
space.\n    \"\"\"\n\n    bx = np.zeros((x_dims, len(y_feats)))\n    for i in range(len(y_feats)):\n        bx[y_feats[i], i] = 1\n\n    return bx\n\n\ndef make_bz_matrix(x_dims, u_dims, x_feats, u_feats):\n    \"\"\"\n    Generates the Bz matrix for the GP augmented MPC.\n    :param x_dims: dimensionality of the state vector\n    :param u_dims: dimensionality of the input vector\n    :param x_feats: array with the indices of the state vector x used to make the first part of the GP feature vector z\n    :param u_feats: array with the indices of the input vector u used to make the second part of the GP feature vector z\n    :return:  The Bz matrix to map from input x and u features to the z feature vector.\n    \"\"\"\n\n    bz = np.zeros((len(x_feats), x_dims))\n    for i in range(len(x_feats)):\n        bz[i, x_feats[i]] = 1\n    bzu = np.zeros((len(u_feats), u_dims))\n    for i in range(len(u_feats)):\n        bzu[i, u_feats[i]] = 1\n    bz = np.concatenate((bz, np.zeros((len(x_feats), u_dims))), axis=1)\n    bzu = np.concatenate((np.zeros((len(u_feats), x_dims)), bzu), axis=1)\n    bz = np.concatenate((bz, bzu), axis=0)\n    return bz\n\n\ndef quaternion_state_mse(x, x_ref, mask):\n    \"\"\"\n    Calculates the MSE of the 13-dimensional state (p_xyz, q_wxyz, v_xyz, r_xyz) wrt. the reference state. 
The error of\n    the quaternions is computed axes-wise from the attitude error quaternion.\n\n    :param x: 13-dimensional state\n    :param x_ref: 13-dimensional reference state\n    :param mask: 12-dimensional masking for the weighted error (p_xyz, q_xyz, v_xyz, r_xyz)\n    :return: the weighted norm of the error between the two states\n    \"\"\"\n\n    q_error = q_dot_q(x[3:7], quaternion_inverse(x_ref[3:7]))\n    e = np.concatenate((x[:3] - x_ref[:3], q_error[1:], x[7:10] - x_ref[7:10], x[10:] - x_ref[10:]))\n\n    return np.sqrt((e * np.array(mask)).dot(e))\n\n\ndef separate_variables(traj):\n    \"\"\"\n    Reshapes a trajectory into expected format.\n\n    :param traj: N x 13 array representing the reference trajectory\n    :return: A list with the components: Nx3 position trajectory array, Nx4 quaternion trajectory array, Nx3 velocity\n    trajectory array, Nx3 body rate trajectory array\n    \"\"\"\n\n    p_traj = traj[:, :3]\n    a_traj = traj[:, 3:7]\n    v_traj = traj[:, 7:10]\n    r_traj = traj[:, 10:]\n    return [p_traj, a_traj, v_traj, r_traj]\n"
  },
  {
    "path": "ros_dd_mpc/src/utils/visualization.py",
    "content": "\"\"\" Miscellaneous visualization functions.\n\nThis program is free software: you can redistribute it and/or modify it under\nthe terms of the GNU General Public License as published by the Free Software\nFoundation, either version 3 of the License, or (at your option) any later\nversion.\nThis program is distributed in the hope that it will be useful, but WITHOUT\nANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\nFOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.\nYou should have received a copy of the GNU General Public License along with\nthis program. If not, see <http://www.gnu.org/licenses/>.\n\"\"\"\n\nimport json\n# import tikzplotlib\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport matplotlib.animation as animation\nfrom matplotlib.colors import LinearSegmentedColormap, BoundaryNorm\nfrom matplotlib.colorbar import ColorbarBase\nfrom matplotlib import cm\nfrom mpl_toolkits.mplot3d import Axes3D\n\nfrom config.configuration_parameters import DirectoryConfig as PathConfig\nfrom src.utils.utils import v_dot_q, quaternion_to_euler, quaternion_inverse, q_dot_q, safe_mknode_recursive, \\\n    safe_mkdir_recursive\nimport os\n\n\ndef angle_to_rot_mat(angle):\n    \"\"\"\n    Computes the 2x2 rotation matrix from the scalar angle\n    :param angle: scalar angle in radians\n    :return: the corresponding 2x2 rotation matrix\n    \"\"\"\n\n    s = np.sin(angle)\n    c = np.cos(angle)\n    return np.array([[c, -s], [s, c]])\n\n\ndef draw_arrow(x_base, y_base, x_body, y_body):\n    \"\"\"\n    Returns the coordinates for drawing a 2D arrow given its origin point and its length.\n    :param x_base: x coordinate of the arrow origin\n    :param y_base: y coordinate of the arrow origin\n    :param x_body: x length of the arrow\n    :param y_body: y length of the arrow\n    :return: a tuple of x, y coordinates to plot the arrow\n    \"\"\"\n\n    len_arrow = np.sqrt(x_body ** 2 + 
y_body ** 2)\n    beta = np.arctan2(y_body, x_body)\n    beta_rot = angle_to_rot_mat(beta)\n    lower_arrow = beta_rot.dot(np.array([[-np.cos(np.pi / 6)], [-np.sin(np.pi / 6)]]) * len_arrow / 3)\n    upper_arrow = beta_rot.dot(np.array([[-np.cos(np.pi / 6)], [np.sin(np.pi / 6)]]) * len_arrow / 3)\n\n    return ([x_base, x_base + x_body, x_base + x_body + lower_arrow[0, 0],\n             x_base + x_body, x_base + x_body + upper_arrow[0, 0]],\n            [y_base, y_base + y_body, y_base + y_body + lower_arrow[1, 0],\n             y_base + y_body, y_base + y_body + upper_arrow[1, 0]])\n\n\ndef draw_drone(pos, q_rot, x_f, y_f):\n\n    # Define quadrotor extremities in body reference frame\n    x1 = np.array([x_f[0], y_f[0], 0])\n    x2 = np.array([x_f[1], y_f[1], 0])\n    x3 = np.array([x_f[2], y_f[2], 0])\n    x4 = np.array([x_f[3], y_f[3], 0])\n\n    # Convert to world reference frame and add quadrotor center point:\n    x1 = v_dot_q(x1, q_rot) + pos\n    x2 = v_dot_q(x2, q_rot) + pos\n    x3 = v_dot_q(x3, q_rot) + pos\n    x4 = v_dot_q(x4, q_rot) + pos\n\n    # Build set of coordinates for plotting\n    return ([x1[0], x3[0], pos[0], x2[0], x4[0]],\n            [x1[1], x3[1], pos[1], x2[1], x4[1]],\n            [x1[2], x3[2], pos[2], x2[2], x4[2]])\n\n\ndef draw_covariance_ellipsoid(center, covar):\n    \"\"\"\n    :param center: 3-dimensional array. Center of the ellipsoid\n    :param covar: 3x3 covariance matrix. 
If the covariance is diagonal, the ellipsoid will have radii equal to the\n    three diagonal values along the x, y, z axes, respectively.\n    :return: the x, y, z coordinates of the points sampled on the ellipsoid surface\n    \"\"\"\n\n    # find the rotation matrix and radii of the axes\n    _, radii, rotation = np.linalg.svd(covar)\n\n    # sample a spherical grid, scale it by the radii and rotate it into place\n    u = np.linspace(0.0, 2.0 * np.pi, 20)\n    v = np.linspace(0.0, np.pi, 20)\n    x = radii[0] * np.outer(np.cos(u), np.sin(v))\n    y = radii[1] * np.outer(np.sin(u), np.sin(v))\n    z = radii[2] * np.outer(np.ones_like(u), np.cos(v))\n    for i in range(len(x)):\n        for j in range(len(x)):\n            [x[i, j], y[i, j], z[i, j]] = np.dot([x[i, j], y[i, j], z[i, j]], rotation) + center\n\n    x = np.reshape(x, -1)\n    y = np.reshape(y, -1)\n    z = np.reshape(z, -1)\n    return x, y, z\n\n\ndef visualize_data_distribution(x_data, y_data, clusters, x_pruned, y_pruned):\n    \"\"\"\n    Visualizes the distribution of the training dataset and the assignment of the GP prediction clusters.\n    :param x_data: numpy array of shape N x 3, where N is the number of training points. Feature variables.\n    :param y_data: numpy array of shape N x 3, where N is the number of training points. Regressed variables.\n    :param x_pruned: numpy array of shape M x 3, where M is the number of pruned training points. Feature variables.\n    :param y_pruned: numpy array of shape M x 3, where M is the number of pruned training points. 
Regressed variables.\n    :param clusters: A dictionary where each entry is indexed by the cluster number, and contains a list of all the\n    indices of the points in x_pruned belonging to that cluster.\n    \"\"\"\n\n    if x_data.shape[1] < 3:\n        return\n\n    fig = plt.figure()\n    ax = fig.add_subplot(131, projection='3d')\n    c = np.sqrt(np.sum(y_data ** 2, 1))\n    scatter = ax.scatter(x_data[:, 0], x_data[:, 1], x_data[:, 2], c=c, alpha=0.6)\n    ax.set_title('Raw data: Correction magnitude')\n    ax.set_xlabel(r'$v_x\\: [m/s]$')\n    ax.set_ylabel(r'$v_y\\: [m/s]$')\n    ax.set_zlabel(r'$v_z\\: [m/s]$')\n    fig.colorbar(scatter, ax=ax, orientation='vertical', shrink=0.75)\n\n    ax = fig.add_subplot(132, projection='3d')\n    c = np.sqrt(np.sum(y_pruned ** 2, 1))\n    scatter = ax.scatter(x_pruned[:, 0], x_pruned[:, 1], x_pruned[:, 2], c=c, alpha=0.6)\n    ax.set_title('Pruned data: Correction magnitude')\n    ax.set_xlabel(r'$v_x\\: [m/s]$')\n    ax.set_ylabel(r'$v_y\\: [m/s]$')\n    ax.set_zlabel(r'$v_z\\: [m/s]$')\n    fig.colorbar(scatter, ax=ax, orientation='vertical', shrink=0.75)\n\n    n_clusters = len(clusters.keys())\n\n    ax = fig.add_subplot(133, projection='3d')\n    for i in range(int(n_clusters)):\n        ax.scatter(x_pruned[clusters[i], 0], x_pruned[clusters[i], 1], x_pruned[clusters[i], 2], alpha=0.6)\n    ax.set_title('Cluster assignations')\n    ax.set_xlabel(r'$v_x\\: [m/s]$')\n    ax.set_ylabel(r'$v_y\\: [m/s]$')\n    ax.set_zlabel(r'$v_z\\: [m/s]$')\n\n    plt.show()\n\n\ndef visualize_gp_inference(x_data, u_data, y_data, gp_ensemble, vis_features_x, y_dims, labels):\n    # WARNING: This function is extremely limited to the case where the regression is performed using just the\n    # velocity state as input features and as output dimensions.\n\n    predictions = gp_ensemble.predict(x_data.T, u_data.T)\n    predictions = np.atleast_2d(np.atleast_2d(predictions[\"pred\"])[y_dims])\n\n    if len(vis_features_x) > 1:\n        
y_pred = np.sqrt(np.sum(predictions ** 2, 0))\n        y_mse = np.sqrt(np.sum(y_data ** 2, 1))\n    else:\n        y_pred = predictions[0, :]\n        y_mse = y_data[:, 0]\n\n    v_min = min(np.min(y_pred), np.min(y_mse))\n    v_max = max(np.max(y_pred), np.max(y_mse))\n\n    fig = plt.figure()\n\n    font_size = 16\n\n    if len(vis_features_x) == 1:\n        # Feature dimension is only 1\n\n        # Compute windowed average\n        n_bins = 20\n        _, b = np.histogram(x_data[:, vis_features_x], bins=n_bins)\n        hist_indices = np.digitize(x_data[:, vis_features_x], b)\n        win_average = np.zeros(n_bins)\n        for i in range(n_bins):\n            win_average[i] = np.mean(y_mse[np.where(hist_indices == i + 1)[0]])\n        bin_midpoints = b[:-1] + np.diff(b)[0] / 2\n\n        ax = [fig.add_subplot(121), fig.add_subplot(122)]\n\n        ax[0].scatter(x_data[:, vis_features_x], y_mse)\n        ax[0].set_xlabel(labels[0])\n        ax[0].set_ylabel('RMSE')\n        ax[0].set_title('Post-processed dataset')\n\n        ax[1].scatter(x_data[:, vis_features_x], y_pred, label='GP')\n        ax[1].plot(bin_midpoints, win_average, label='window average')\n        ax[1].set_xlabel(labels[0])\n        ax[1].set_title('Predictions')\n        ax[1].legend()\n\n        return\n\n    elif len(vis_features_x) >= 3:\n        ax = [fig.add_subplot(121, projection='3d'), fig.add_subplot(122, projection='3d')]\n        im = ax[0].scatter(x_data[:, vis_features_x[0]], x_data[:, vis_features_x[1]], x_data[:, vis_features_x[2]], c=y_mse,\n                           cmap='viridis', alpha=0.6, vmin=v_min, vmax=v_max)\n        ax[0].set_xlabel(labels[0], size=font_size - 4, labelpad=10)\n        ax[0].set_ylabel(labels[1], size=font_size - 4, labelpad=10)\n        ax[0].set_zlabel(labels[2], size=font_size - 4, labelpad=10)\n        ax[0].set_title(r'Nominal MPC error $\\|\\mathbf{a}^e\\|$', size=font_size)\n        ax[0].view_init(65, 15)\n\n        ax[1].scatter(x_data[:, 
vis_features_x[0]], x_data[:, vis_features_x[1]], x_data[:, vis_features_x[2]], c=y_pred,\n                      cmap='viridis', alpha=0.6, vmin=v_min, vmax=v_max)\n        ax[1].set_xlabel(labels[0], size=font_size - 4, labelpad=10)\n        ax[1].set_ylabel(labels[1], size=font_size - 4, labelpad=10)\n        ax[1].set_zlabel(labels[2], size=font_size - 4, labelpad=10)\n        ax[1].set_title(r'GP prediction magnitude $\\|\\tilde{\\mathbf{a}}^e\\|$', size=font_size)\n        ax[1].view_init(65, 15)\n\n        plt.tight_layout()\n        fig.subplots_adjust(right=0.85)\n        cbar = fig.add_axes([0.90, 0.05, 0.03, 0.8])\n        fig.colorbar(im, cax=cbar)\n        cbar.get_yaxis().labelpad = 15\n        cbar.set_ylabel(r'$\\|\\mathbf{a}^e\\|\\left[\\frac{m}{s^2}\\right]$', size=font_size, labelpad=20, rotation=270)\n        cbar.tick_params(labelsize=font_size - 4)\n\n    # Create values for the regressed variables\n    x = np.linspace(min(x_data[:, vis_features_x[0]]), max(x_data[:, vis_features_x[0]]), 100)\n    y = np.linspace(min(x_data[:, vis_features_x[1]]), max(x_data[:, vis_features_x[1]]), 100)\n    x_mesh, y_mesh = np.meshgrid(x, y)\n    x = np.reshape(x_mesh, (-1, 1))\n    y = np.reshape(y_mesh, (-1, 1))\n    z = np.zeros_like(x)\n    x_sample = np.concatenate((x, y, z), 1)\n\n    # Generate complete mock x features. 
Only vis_features are non-zero\n    x_mock = np.tile(np.zeros_like(z), (1, x_data.shape[1]))\n    x_mock[:, np.array(vis_features_x)] = x_sample\n\n    # Also create mock u features\n    u_mock = np.tile(np.zeros_like(z), (1, u_data.shape[1]))\n\n    if len(vis_features_x) != 3:\n        plt.show()\n        return\n\n    # Generate animated plot showing prediction of the multiple clusters.\n    # Cluster coloring only possible if all the output dimensions have exactly the same clusters.\n    fig = plt.figure()\n    ax = fig.add_subplot(111, projection='3d')\n    print(\"Grid sampling...\")\n    outs = gp_ensemble.predict(x_mock.T, u_mock.T, return_gp_id=True, progress_bar=True)\n    y_pred = np.atleast_2d(np.atleast_2d(outs[\"pred\"])[y_dims])\n    gp_ids = outs[\"gp_id\"]\n    y_sample = np.sqrt(np.sum(y_pred ** 2, 0))\n    y_sample = np.reshape(y_sample, x_mesh.shape)\n\n    gp_ids = np.reshape(gp_ids[next(iter(gp_ids))], x_mesh.shape)\n\n    def init():\n        # create the new map\n        cmap = cm.get_cmap('jet')\n        cmaplist = [cmap(j) for j in range(cmap.N)]\n        cmap = LinearSegmentedColormap.from_list('Custom cmap', cmaplist, cmap.N)\n\n        # define the bins and normalize\n        capped_n_clusters = min(np.amax(gp_ids) + 2, 20)\n        bounds = np.linspace(0, np.amax(gp_ids) + 1, capped_n_clusters)\n        norm = BoundaryNorm(bounds, cmap.N)\n\n        my_col = cm.get_cmap('jet')(gp_ids / (np.amax(gp_ids) + 1))\n\n        ax.plot_surface(x_mesh, y_mesh, y_sample, facecolors=my_col, linewidth=0, rstride=1, cstride=1,\n                        antialiased=False, alpha=0.7, cmap=cmap, norm=norm)\n        ax2 = fig.add_axes([0.90, 0.2, 0.03, 0.6])\n        ColorbarBase(ax2, cmap=cmap, norm=norm, spacing='proportional', ticks=bounds, boundaries=bounds, format='%1i')\n        ax2.set_ylabel('Cluster assignment ID', size=14)\n        ax2.tick_params(labelsize=16)\n\n        ax.tick_params(labelsize=14)\n        ax.set_xlabel(labels[0], size=16, 
labelpad=10)\n        ax.set_ylabel(labels[1], size=16, labelpad=10)\n        ax.set_zlabel(r'$\\|\\tilde{\\mathbf{a}}^e\\|\\: \\left[\\frac{m}{s^2}\\right]$', size=16, labelpad=10)\n        ax.set_title(r'GP correction. Slice $v_z=0 \\:\\: \\left[\\frac{m}{s}\\right]$', size=18)\n        return fig,\n\n    def animate(i):\n        ax.view_init(elev=30., azim=i*3)\n        return fig\n\n    _ = animation.FuncAnimation(fig, animate, init_func=init, frames=360, interval=20, blit=False)\n\n    plt.show()\n\n\ndef initialize_drone_plotter(world_rad, quad_rad, n_props, full_traj=None):\n\n    fig = plt.figure(figsize=(10, 10), dpi=96)\n    fig.show()\n\n    mng = plt.get_current_fig_manager()\n    mng.resize(*mng.window.maxsize())\n\n    ax = fig.add_subplot(111, projection='3d')\n\n    if full_traj is not None:\n        ax.plot(full_traj[:, 0], full_traj[:, 1], full_traj[:, 2], '--', color='tab:blue', alpha=0.5)\n        ax.set_xlim([ax.get_xlim()[0] - 2 * quad_rad, ax.get_xlim()[1] + 2 * quad_rad])\n        ax.set_ylim([ax.get_ylim()[0] - 2 * quad_rad, ax.get_ylim()[1] + 2 * quad_rad])\n        ax.set_zlim([ax.get_zlim()[0] - 2 * quad_rad, ax.get_zlim()[1] + 2 * quad_rad])\n    else:\n        ax.set_xlim([-world_rad, world_rad])\n        ax.set_ylim([-world_rad, world_rad])\n        ax.set_zlim([-world_rad, world_rad])\n\n    ax.set_xlabel('x [m]')\n    ax.set_ylabel('y [m]')\n    ax.set_zlabel('z [m]')\n    fig.canvas.draw()\n    plt.draw()\n\n    # cache the background\n    background = fig.canvas.copy_from_bbox(ax.bbox)\n\n    artists = {\n        \"trajectory\": ax.plot([], [])[0], \"drone\": ax.plot([], [], 'o-')[0],\n        \"drone_x\": ax.plot([], [], 'o-', color='r')[0],\n        \"missing_targets\": ax.plot([], [], [], color='r', marker='o', linestyle='None', markersize=12)[0],\n        \"reached_targets\": ax.plot([], [], [], color='g', marker='o', linestyle='None', markersize=12)[0],\n        \"sim_trajectory\": [ax.plot([], [], [], '-', color='tab:blue', 
alpha=0.9 - i * 0.2 / n_props)[0]\n                           for i in range(n_props)],\n        \"int_trajectory\": [ax.plot([], [], [], '-', color='tab:orange', alpha=0.9 - i * 0.5 / n_props)[0]\n                           for i in range(n_props + 1)],\n        \"prop_trajectory\": [ax.plot([], [], [], '-', color='tab:red', alpha=0.9 - i * 0.2 / n_props)[0]\n                            for i in range(n_props)],\n        \"prop_covariance\": [ax.plot([], [], [], color='r', alpha=0.5 - i * 0.45 / n_props)[0]\n                            for i in range(n_props)],\n        \"projection_lines\": [ax.plot([], [], [], '-', color='tab:blue', alpha=0.2)[0],\n                             ax.plot([], [], [], '-', color='tab:blue', alpha=0.2)[0]],\n        \"projection_target\": [ax.plot([], [], [], marker='o', color='r', linestyle='None', alpha=0.2)[0],\n                              ax.plot([], [], [], marker='o', color='r', linestyle='None', alpha=0.2)[0]]}\n\n    art_pack = fig, ax, artists, background, world_rad\n    return art_pack\n\n\ndef draw_drone_simulation(art_pack, x_trajectory, quad, targets, targets_reached, sim_traj=None,\n                          int_traj=None, pred_traj=None, x_pred_cov=None, follow_quad=False):\n\n    fig, ax, artists, background, world_rad = art_pack\n\n    trajectories_artist = artists[\"trajectory\"] if \"trajectory\" in artists.keys() else []\n    projected_traj_artists = artists[\"projection_lines\"] if \"projection_lines\" in artists.keys() else []\n    drone_sketch_artist = artists[\"drone\"] if \"drone\" in artists.keys() else []\n    drone_sketch_artist_x_motor = artists[\"drone_x\"] if \"drone_x\" in artists.keys() else []\n    targets_artist = artists[\"missing_targets\"] if \"missing_targets\" in artists.keys() else []\n    reached_targets_artist = artists[\"reached_targets\"] if \"reached_targets\" in artists.keys() else []\n    projected_tar_artists = artists[\"projection_target\"] if \"projection_target\" in artists.keys() 
else []\n    sim_traj_artists = artists[\"sim_trajectory\"] if \"sim_trajectory\" in artists.keys() else []\n    int_traj_artists = artists[\"int_trajectory\"] if \"int_trajectory\" in artists.keys() else []\n    pred_traj_artists = artists[\"prop_trajectory\"] if \"prop_trajectory\" in artists.keys() else []\n    cov_artists = artists[\"prop_covariance\"] if \"prop_covariance\" in artists.keys() else []\n\n    # restore background\n    fig.canvas.restore_region(background)\n\n    def draw_fading_traj(traj, traj_artists):\n        traj = np.squeeze(np.array(traj))\n        for j in range(min(traj.shape[0] - 1, len(traj_artists))):\n            traj_artists[j].set_data([traj[j, 0], traj[j + 1, 0]], [traj[j, 1], traj[j + 1, 1]])\n            traj_artists[j].set_3d_properties([traj[j, 2], traj[j + 1, 2]])\n\n    # Draw missing and reached targets\n    if targets is not None and targets_reached is not None:\n        reached = targets[targets_reached, :]\n        reached = reached[-2:, :]\n        missing = targets[targets_reached == False, :]\n        reached_targets_artist.set_data(reached[:, 0], reached[:, 1])\n        reached_targets_artist.set_3d_properties(reached[:, 2])\n        targets_artist.set_data(missing[:, 0], missing[:, 1])\n        targets_artist.set_3d_properties(missing[:, 2])\n        ax.draw_artist(targets_artist)\n        ax.draw_artist(reached_targets_artist)\n\n        # Draw projected target\n        if missing.any():\n            projected_tar_artists[0].set_data([missing[0, 0]], [ax.get_ylim()[1]])\n            projected_tar_artists[0].set_3d_properties([missing[0, 2]])\n            projected_tar_artists[1].set_data([ax.get_xlim()[0]], [missing[0, 1]])\n            projected_tar_artists[1].set_3d_properties([missing[0, 2]])\n            [ax.draw_artist(projected_tar_artist) for projected_tar_artist in projected_tar_artists]\n\n    # Draw quadrotor trajectory\n    trajectory_start_pt = max(len(x_trajectory) - 100, 0)\n    
trajectories_artist.set_data(x_trajectory[trajectory_start_pt:, 0], x_trajectory[trajectory_start_pt:, 1])\n    trajectories_artist.set_3d_properties(x_trajectory[trajectory_start_pt:, 2])\n    ax.draw_artist(trajectories_artist)\n\n    # Draw projected trajectory\n    projected_traj_artists[0].set_data(x_trajectory[trajectory_start_pt:, 0], ax.get_ylim()[1])\n    projected_traj_artists[0].set_3d_properties(x_trajectory[trajectory_start_pt:, 2])\n    projected_traj_artists[1].set_data(ax.get_xlim()[0], x_trajectory[trajectory_start_pt:, 1])\n    projected_traj_artists[1].set_3d_properties(x_trajectory[trajectory_start_pt:, 2])\n    [ax.draw_artist(projected_traj_artist) for projected_traj_artist in projected_traj_artists]\n\n    # Draw drone art\n    drone_art = draw_drone(x_trajectory[-1, 0:3], x_trajectory[-1, 3:7], quad.x_f, quad.y_f)\n    drone_sketch_artist_x_motor.set_data(drone_art[0][0], drone_art[1][0])\n    drone_sketch_artist_x_motor.set_3d_properties(drone_art[2][0])\n    drone_sketch_artist.set_data(drone_art[0], drone_art[1])\n    drone_sketch_artist.set_3d_properties(drone_art[2])\n    ax.draw_artist(drone_sketch_artist)\n    ax.draw_artist(drone_sketch_artist_x_motor)\n\n    if int_traj is not None:\n        draw_fading_traj(int_traj, int_traj_artists)\n        for int_traj_artist in int_traj_artists:\n            ax.draw_artist(int_traj_artist)\n\n    if sim_traj is not None:\n        draw_fading_traj(sim_traj, sim_traj_artists)\n        for sim_traj_artist in sim_traj_artists:\n            ax.draw_artist(sim_traj_artist)\n\n    if pred_traj is not None:\n        draw_fading_traj(pred_traj, pred_traj_artists)\n        for pred_traj_artist in pred_traj_artists:\n            ax.draw_artist(pred_traj_artist)\n\n    if x_pred_cov is not None:\n        n_std = 3\n        x_std = np.sqrt(x_pred_cov[:, 0, 0]) * n_std\n        y_std = np.sqrt(x_pred_cov[:, 1, 1]) * n_std\n        z_std = np.sqrt(x_pred_cov[:, 2, 2]) * n_std\n        for i, cov_artist in 
enumerate(cov_artists):\n            center = pred_traj[i+1, 0:3]\n            radii = np.diag(np.array([x_std[i], y_std[i], z_std[i]]))\n            x, y, z = draw_covariance_ellipsoid(center, radii)\n            cov_artist.set_data(x, y)\n            cov_artist.set_3d_properties(z)\n        for cov_artist in cov_artists:\n            ax.draw_artist(cov_artist)\n\n    if follow_quad:\n        ax.set_xlim([x_trajectory[-1, 0] - world_rad, x_trajectory[-1, 0] + world_rad])\n        ax.set_ylim([x_trajectory[-1, 1] - world_rad, x_trajectory[-1, 1] + world_rad])\n        ax.set_zlim([x_trajectory[-1, 2] - world_rad, x_trajectory[-1, 2] + world_rad])\n\n    # fill in the axes rectangle\n    fig.canvas.blit(ax.bbox)\n\n\ndef trajectory_tracking_results(t_ref, x_ref, x_executed, u_ref, u_executed, title, w_control=None, legend_labels=None,\n                                quat_error=True):\n\n    if legend_labels is None:\n        legend_labels = ['reference', 'simulated']\n\n    with_ref = x_ref is not None\n\n    fig, ax = plt.subplots(3, 4, sharex='all', figsize=(7, 9))\n\n    SMALL_SIZE = 8\n    MEDIUM_SIZE = 10\n    BIGGER_SIZE = 12\n\n    plt.rc('font', size=SMALL_SIZE)  # controls default text sizes\n    plt.rc('axes', titlesize=SMALL_SIZE)  # fontsize of the axes title\n    plt.rc('axes', labelsize=MEDIUM_SIZE)  # fontsize of the x and y labels\n    plt.rc('xtick', labelsize=SMALL_SIZE)  # fontsize of the tick labels\n    plt.rc('ytick', labelsize=SMALL_SIZE)  # fontsize of the tick labels\n    plt.rc('legend', fontsize=SMALL_SIZE)  # legend fontsize\n    plt.rc('figure', titlesize=BIGGER_SIZE)  # fontsize of the figure title\n\n    labels = ['x', 'y', 'z']\n    for i in range(3):\n        ax[i, 0].plot(t_ref, x_executed[:, i], label=legend_labels[1])\n        if with_ref:\n            ax[i, 0].plot(t_ref, x_ref[:, i], label=legend_labels[0])\n        ax[i, 0].legend()\n        ax[i, 0].set_ylabel(labels[i])\n    ax[0, 
0].set_title(r'$p\\:[m]$')\n    ax[2, 0].set_xlabel(r'$t [s]$')\n\n    q_euler = np.stack([quaternion_to_euler(x_executed[j, 3:7]) for j in range(x_executed.shape[0])])\n    for i in range(3):\n        ax[i, 1].plot(t_ref, q_euler[:, i], label=legend_labels[1])\n    if with_ref:\n        ref_euler = np.stack([quaternion_to_euler(x_ref[j, 3:7]) for j in range(x_ref.shape[0])])\n        q_err = []\n        for i in range(t_ref.shape[0]):\n            q_err.append(q_dot_q(x_executed[i, 3:7], quaternion_inverse(x_ref[i, 3:7])))\n        q_err = np.stack(q_err)\n\n        for i in range(3):\n            ax[i, 1].plot(t_ref, ref_euler[:, i], label=legend_labels[0])\n            if quat_error:\n                ax[i, 1].plot(t_ref, q_err[:, i + 1], label='quat error')\n    for i in range(3):\n        ax[i, 1].legend()\n    ax[0, 1].set_title(r'$\\theta\\:[rad]$')\n    ax[2, 1].set_xlabel(r'$t [s]$')\n\n    for i in range(3):\n        ax[i, 2].plot(t_ref, x_executed[:, i + 7], label=legend_labels[1])\n        if with_ref:\n            ax[i, 2].plot(t_ref, x_ref[:, i + 7], label=legend_labels[0])\n        ax[i, 2].legend()\n    ax[0, 2].set_title(r'$v\\:[m/s]$')\n    ax[2, 2].set_xlabel(r'$t [s]$')\n\n    for i in range(3):\n        ax[i, 3].plot(t_ref, x_executed[:, i + 10], label=legend_labels[1])\n        if with_ref:\n            ax[i, 3].plot(t_ref, x_ref[:, i + 10], label=legend_labels[0])\n        if w_control is not None:\n            ax[i, 3].plot(t_ref, w_control[:, i], label='control')\n        ax[i, 3].legend()\n    ax[0, 3].set_title(r'$\\omega\\:[rad/s]$')\n    ax[2, 3].set_xlabel(r'$t [s]$')\n\n    plt.suptitle(title)\n\n    if u_ref is not None and u_executed is not None:\n        ax = plt.subplots(1, 4, sharex=\"all\", sharey=\"all\")[1]\n        for i in range(4):\n            ax[i].plot(t_ref, u_ref[:, i], label='ref')\n            ax[i].plot(t_ref, u_executed[:, i], label='simulated')\n            ax[i].set_xlabel(r'$t [s]$')\n            tit = 'Control 
%d' % (i + 1)\n            ax[i].set_title(tit)\n            ax[i].legend()\n\n    dir_path = os.path.dirname(os.path.realpath(__file__))\n    img_save_dir = dir_path + '/../../results/images/'\n    safe_mkdir_recursive(img_save_dir, overwrite=False)\n    fig.savefig(img_save_dir + 'mse_exp', dpi=None, facecolor='w', edgecolor='w',\n                orientation='portrait',\n                transparent=False, pad_inches=0.1)\n\n\ndef mse_tracking_experiment_plot(v_max, mse, traj_type_vec, train_samples_vec, legends, y_labels, t_opt=None,\n                                 font_size=16):\n\n    # Check if there is the variants dimension in the data\n    if len(mse.shape) == 4:\n        variants_dim = mse.shape[3]\n    else:\n        variants_dim = 1\n\n    fig, axes = plt.subplots(variants_dim, len(traj_type_vec), sharex='col', sharey='none',\n                             figsize=(17, 2.5 * variants_dim + 2))\n    if variants_dim == 1 and len(traj_type_vec) > 1:\n        axes = axes[np.newaxis, :]\n    elif variants_dim == 1:\n        axes = np.expand_dims(axes, 0)\n        axes = np.expand_dims(axes, 0)\n    elif len(traj_type_vec) == 1:\n        axes = axes[:, np.newaxis]\n\n    for seed_id, track_seed in enumerate(traj_type_vec):\n        for j in range(variants_dim):\n            for i, _ in enumerate(train_samples_vec):\n                mse_data = mse[seed_id, :, i, j] if len(mse.shape) == 4 else mse[seed_id, :, i]\n                label = legends[i] if seed_id == 0 and j == 0 else None\n                if legends[i] == 'perfect':\n                    axes[j, seed_id].plot(v_max[seed_id, :], mse_data, '--o', linewidth=4, label=label)\n                else:\n                    axes[j, seed_id].plot(v_max[seed_id, :], mse_data, '--o', label=label)\n            if seed_id == 0:\n                axes[j, seed_id].set_ylabel(y_labels[j], size=font_size)\n            if j == 0:\n                axes[j, seed_id].set_title('RMSE [m] | ' + str(track_seed), 
size=font_size+2)\n\n            axes[j, seed_id].grid()\n            axes[j, seed_id].tick_params(labelsize=font_size)\n\n        axes[variants_dim - 1, seed_id].set_xlabel('max vel [m/s]', size=font_size)\n\n    legend_cols = len(train_samples_vec)\n    fig.legend(loc=\"upper center\", fancybox=True, borderaxespad=0.05, ncol=legend_cols, mode=\"expand\",\n               fontsize=font_size - 4)\n    plt.tight_layout(h_pad=1.4)\n    plt.subplots_adjust(top=0.7 + 0.05 * variants_dim)\n\n    dir_path = os.path.dirname(os.path.realpath(__file__))\n    img_save_dir = dir_path + '/../../results/images/'\n    safe_mkdir_recursive(img_save_dir, overwrite=False)\n\n    try:\n        tikzplotlib.save(img_save_dir + \"mse.tex\")\n    except Exception:\n        # tikzplotlib export is optional; skip it if unavailable\n        pass\n    fig.savefig(img_save_dir + 'mse', dpi=None, facecolor='w', edgecolor='w',\n                orientation='portrait',\n                transparent=False, pad_inches=0.1)\n\n    if t_opt is None:\n        return\n\n    v = v_max.reshape(-1)\n    ind_v = np.argsort(v, axis=0)\n\n    fig = plt.figure(figsize=(17, 4.5))\n    for i, n_train in enumerate(train_samples_vec):\n        plt.plot(v[ind_v], t_opt.reshape(t_opt.shape[0] * t_opt.shape[1], -1)[ind_v, i], label=legends[i])\n    fig.legend(loc=\"upper center\", fancybox=True, borderaxespad=0.05, ncol=legend_cols, mode=\"expand\",\n               fontsize=font_size)\n    plt.ylabel('Mean MPC loop time (s)', fontsize=font_size)\n    plt.xlabel('Max vel [m/s]', fontsize=font_size)\n\n    try:\n        tikzplotlib.save(img_save_dir + \"t_opt.tex\")\n    except Exception:\n        # tikzplotlib export is optional; skip it if unavailable\n        pass\n    fig.savefig(img_save_dir + 't_opt', dpi=None, facecolor='w', edgecolor='w',\n                orientation='portrait',\n                transparent=False, bbox_inches=None, pad_inches=0.1)\n\n\ndef load_past_experiments():\n\n    metadata_file, mse_file, v_file, t_opt_file = get_experiment_files()\n\n    try:\n        with open(metadata_file) as json_file:\n            metadata = json.load(json_file)\n    except (OSError, json.JSONDecodeError):\n        metadata = None\n\n    mse = np.load(mse_file)\n    v = np.load(v_file)\n    t_opt = np.load(t_opt_file)\n\n    return metadata, mse, v, t_opt\n\n\ndef get_experiment_files():\n    results_path = PathConfig.RESULTS_DIR\n    metadata_file = os.path.join(results_path, 'experiments', 'metadata.json')\n    mse_file = os.path.join(results_path, 'experiments', 'mse.npy')\n    mean_v_file = os.path.join(results_path, 'experiments', 'mean_v.npy')\n    t_opt_file = os.path.join(results_path, 'experiments', 't_opt.npy')\n\n    if not os.path.exists(metadata_file):\n        safe_mknode_recursive(os.path.join(results_path, 'experiments'), 'metadata.json', overwrite=False)\n    if not os.path.exists(mse_file):\n        safe_mknode_recursive(os.path.join(results_path, 'experiments'), 'mse.npy', overwrite=False)\n    if not os.path.exists(mean_v_file):\n        safe_mknode_recursive(os.path.join(results_path, 'experiments'), 'mean_v.npy', overwrite=False)\n    if not os.path.exists(t_opt_file):\n        safe_mknode_recursive(os.path.join(results_path, 'experiments'), 't_opt.npy', overwrite=False)\n\n    return metadata_file, mse_file, mean_v_file, t_opt_file\n"
  }
]